Test Report: Docker_Windows 21997

4e6ec0ce1ba9ad510ab2048b3373e13c9f965153:2025-12-05:42642

Failed tests (34/427)

Order  Failed test  Duration (s)
67 TestErrorSpam/setup 46.84
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 521.9
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 376.22
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 54.09
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 54.47
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 54.24
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 743.95
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 53.93
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 20.21
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 4.22
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 122.37
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 243.37
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 22.45
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 52.65
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.11
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.5
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.52
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.53
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.52
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.48
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell 2.9
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 20.18
360 TestKubernetesUpgrade 850.32
406 TestStartStop/group/no-preload/serial/FirstStart 528.97
437 TestStartStop/group/newest-cni/serial/FirstStart 537.34
448 TestStartStop/group/no-preload/serial/DeployApp 5.38
449 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 110.32
465 TestStartStop/group/no-preload/serial/SecondStart 379.99
488 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 115.59
496 TestStartStop/group/newest-cni/serial/SecondStart 384.73
507 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 545.55
511 TestStartStop/group/newest-cni/serial/Pause 13.58
512 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 229.47
TestErrorSpam/setup (46.84s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-472400 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-472400 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 --driver=docker: (46.8368179s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-472400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=21997
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-472400" primary control-plane node in "nospam-472400" cluster
* Pulling base image v0.0.48-1764169655-21974 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-472400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (46.84s)
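The check at error_spam_test.go:96 fails because "minikube start" emitted stderr lines that are not on the test's allowlist of acceptable output; here the registry.k8s.io connectivity warning and the follow-up proxy hint were unexpected. A minimal Go sketch of that style of check, with illustrative allowlist entries (the real test's patterns and structure may differ):

// Hypothetical sketch, not the actual error_spam_test.go logic:
// flag any non-empty stderr line that matches no allowlist entry.
package main

import (
	"fmt"
	"strings"
)

// allowedStderr holds substrings of stderr lines the check tolerates
// (illustrative values, not minikube's real allowlist).
var allowedStderr = []string{
	"kubectl not found",
	"dashboard service is not running",
}

// unexpectedLines returns every non-empty stderr line that matches
// no allowlist entry; any such line fails the spam check.
func unexpectedLines(stderr string) []string {
	var bad []string
	for _, line := range strings.Split(stderr, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		allowed := false
		for _, allow := range allowedStderr {
			if strings.Contains(line, allow) {
				allowed = true
				break
			}
		}
		if !allowed {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
	for _, l := range unexpectedLines(stderr) {
		fmt.Printf("unexpected stderr: %q\n", l)
	}
}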

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (521.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0
E1205 06:27:23.891940    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:22.600330    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:22.607293    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:22.619092    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:22.641522    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:22.683862    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:22.765517    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:22.927692    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:23.250159    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:23.892697    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:25.174437    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:27.736438    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:32.858578    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:43.101506    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:31:03.583852    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:31:44.546341    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:32:23.895692    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:33:06.470521    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:33:46.968687    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m39.1347096s)

-- stdout --
	* [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - HTTP_PROXY=localhost:55388
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=localhost:55388
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-247800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-247800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000755308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129822s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output of the second attempt quoted above (under "X Error starting cluster")
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
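The underlying failure in both kubeadm attempts is the wait-control-plane phase: kubeadm polls the kubelet health endpoint at http://127.0.0.1:10248/healthz and gives up at the 4m0s deadline, first with "connection refused" and then, on the retry, with "context deadline exceeded". A rough Go approximation of that polling loop, inferred from the log messages above rather than taken from kubeadm's source:

// Sketch of a kubelet health wait as described in the kubeadm log:
// poll the healthz URL once per second until a 4-minute deadline.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func waitKubeletHealthy(ctx context.Context, url string) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Maps to "The kubelet is not healthy after 4m0s".
			return fmt.Errorf("kubelet not healthy before deadline: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				continue // e.g. "connection refused" while the kubelet is down
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitKubeletHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err)
	}
}

Since the kubelet never answered on port 10248 at all, the suggestion at the end of the log (check 'journalctl -xeu kubelet', try the systemd cgroup driver) targets the kubelet process itself rather than the API server.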
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 6 (625.4596ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 06:34:41.815797    6048 status.go:458] kubeconfig endpoint: get endpoint: "functional-247800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
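The exit-status-6 stderr above is minikube's status command failing to find the profile's endpoint in the kubeconfig: the file still points at a stale context, and "functional-247800" has no entry yet. An illustrative Go check of the same kind of lookup using k8s.io/client-go; the profile name and file come from the log, but the code is an assumption, not minikube's actual status.go:

// Illustrative kubeconfig lookup: report when a profile has no
// cluster entry in the kubeconfig file (hypothetical helper, not
// minikube's status.go implementation).
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	profile := "functional-247800"
	if _, ok := cfg.Clusters[profile]; !ok {
		fmt.Printf("kubeconfig endpoint: %q does not appear in %s\n", profile, path)
	}
}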
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.0934309s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image save kicbase/echo-server:functional-088800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image rm kicbase/echo-server:functional-088800 --alsologtostderr                                                                        │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ ssh            │ functional-088800 ssh sudo cat /etc/test/nested/copy/8036/hosts                                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image save --daemon kicbase/echo-server:functional-088800 --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ start          │ -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ start          │ -p functional-088800 --dry-run --alsologtostderr -v=1 --driver=docker                                                                                     │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ start          │ -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-088800 --alsologtostderr -v=1                                                                                            │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ service        │ functional-088800 service hello-node --url                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format short --alsologtostderr                                                                                               │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format yaml --alsologtostderr                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ ssh            │ functional-088800 ssh pgrep buildkitd                                                                                                                     │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ image          │ functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr                                                    │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format json --alsologtostderr                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format table --alsologtostderr                                                                                               │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ delete         │ -p functional-088800                                                                                                                                      │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:26 UTC │
	│ start          │ -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:26 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:26:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:26:02.167928    2688 out.go:360] Setting OutFile to fd 1452 ...
	I1205 06:26:02.216749    2688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:26:02.216749    2688 out.go:374] Setting ErrFile to fd 2012...
	I1205 06:26:02.216749    2688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:26:02.230750    2688 out.go:368] Setting JSON to false
	I1205 06:26:02.233654    2688 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7220,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:26:02.233788    2688 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:26:02.243976    2688 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:26:02.251268    2688 notify.go:221] Checking for updates...
	I1205 06:26:02.253613    2688 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:26:02.255751    2688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:26:02.259487    2688 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:26:02.261829    2688 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:26:02.263940    2688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:26:02.266947    2688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:26:02.384379    2688 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:26:02.387346    2688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:26:02.628024    2688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-05 06:26:02.606918007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:26:02.633396    2688 out.go:179] * Using the docker driver based on user configuration
	I1205 06:26:02.636218    2688 start.go:309] selected driver: docker
	I1205 06:26:02.636295    2688 start.go:927] validating driver "docker" against <nil>
	I1205 06:26:02.636323    2688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:26:02.727991    2688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:26:02.978571    2688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-05 06:26:02.957291427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
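
The two identical "docker system info --format "{{json .}}"" probes above are how the start path sizes up the daemon before and after driver selection. As a minimal illustration (not minikube's actual code; the struct and field selection here are a hypothetical reduction), the same probe can be reproduced by shelling out to the Docker CLI and decoding a few fields of its JSON output:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo decodes only a handful of the fields visible in the log above;
// the field names match Docker's JSON output, everything else is illustrative.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	OSType          string `json:"OSType"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	// Equivalent of: docker system info --format "{{json .}}"
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker not reachable: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("parse docker info: %v", err)
	}
	fmt.Printf("%s on %s (%s), %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.OSType, info.NCPU, info.MemTotal)
}
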
	I1205 06:26:02.979100    2688 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:26:02.979929    2688 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:26:02.982860    2688 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 06:26:02.985737    2688 cni.go:84] Creating CNI manager for ""
	I1205 06:26:02.985737    2688 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:26:02.985737    2688 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	W1205 06:26:02.985737    2688 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	W1205 06:26:02.985737    2688 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	I1205 06:26:02.985737    2688 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:26:02.988810    2688 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:26:02.992767    2688 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:26:02.995768    2688 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:26:03.004195    2688 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:26:03.004195    2688 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:26:03.050508    2688 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:26:03.082328    2688 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:26:03.082328    2688 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:26:03.301985    2688 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:26:03.301985    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:26:03.301985    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:26:03.301985    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:26:03.301985    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:26:03.302533    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:26:03.302616    2688 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:26:03.302616    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:26:03.302616    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:26:03.302616    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:26:03.302616    2688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json: {Name:mk2047e8ba0b949dee812cc5f0204d8500323071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
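
The "windows sanitize" lines above exist because an image reference such as registry.k8s.io/pause:3.10.1 cannot be used verbatim as a Windows cache path: ':' is reserved in Windows file names (it denotes a drive letter or an NTFS alternate data stream), so the tag separator is rewritten to '_' before the ref becomes a path component. A one-function sketch of that mapping (illustrative; the helper name is hypothetical):

package main

import (
	"fmt"
	"strings"
)

// sanitize mirrors the "windows sanitize" step logged above (illustrative,
// not minikube's exact code): replace every ':' so the image ref is a legal
// Windows path component.
func sanitize(imageRef string) string {
	return strings.ReplaceAll(imageRef, ":", "_")
}

func main() {
	fmt.Println(sanitize(`registry.k8s.io/pause:3.10.1`)) // registry.k8s.io/pause_3.10.1
}
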
	I1205 06:26:03.303907    2688 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:26:03.304482    2688 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:03.304790    2688 start.go:364] duration metric: took 307.8µs to acquireMachinesLock for "functional-247800"
	I1205 06:26:03.304954    2688 start.go:93] Provisioning new machine with config: &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 06:26:03.305091    2688 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:26:03.310367    2688 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1205 06:26:03.310915    2688 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	W1205 06:26:03.311042    2688 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:55388 to docker env.
	I1205 06:26:03.311042    2688 start.go:159] libmachine.API.Create for "functional-247800" (driver="docker")
	I1205 06:26:03.311042    2688 client.go:173] LocalClient.Create starting
	I1205 06:26:03.311588    2688 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 06:26:03.327430    2688 main.go:143] libmachine: Decoding PEM data...
	I1205 06:26:03.327430    2688 main.go:143] libmachine: Parsing certificate...
	I1205 06:26:03.328273    2688 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 06:26:03.343856    2688 main.go:143] libmachine: Decoding PEM data...
	I1205 06:26:03.344057    2688 main.go:143] libmachine: Parsing certificate...
	I1205 06:26:03.350433    2688 cli_runner.go:164] Run: docker network inspect functional-247800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 06:26:03.409378    2688 cli_runner.go:211] docker network inspect functional-247800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 06:26:03.413395    2688 network_create.go:284] running [docker network inspect functional-247800] to gather additional debugging logs...
	I1205 06:26:03.413395    2688 cli_runner.go:164] Run: docker network inspect functional-247800
	W1205 06:26:03.618354    2688 cli_runner.go:211] docker network inspect functional-247800 returned with exit code 1
	I1205 06:26:03.618354    2688 network_create.go:287] error running [docker network inspect functional-247800]: docker network inspect functional-247800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-247800 not found
	I1205 06:26:03.618412    2688 network_create.go:289] output of [docker network inspect functional-247800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-247800 not found
	
	** /stderr **
	I1205 06:26:03.623237    2688 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:26:03.698240    2688 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bfe480}
	I1205 06:26:03.698240    2688 network_create.go:124] attempt to create docker network functional-247800 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 06:26:03.704248    2688 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-247800 functional-247800
	I1205 06:26:04.280468    2688 network_create.go:108] docker network functional-247800 192.168.49.0/24 created
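
Note the pattern in the network setup above: "docker network inspect" exiting with status 1 ("network functional-247800 not found") is treated as the signal to create the network with an explicit subnet, gateway, and MTU, not as a hard failure. A condensed sketch of that inspect-then-create flow (illustrative; the real code also scans for a free private subnet, starting at 192.168.49.0/24, before creating):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// ensureNetwork reproduces the inspect-then-create dance from the log.
func ensureNetwork(name, subnet, gateway string) error {
	// A non-zero exit from inspect means the network is absent.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "com.docker.network.driver.mtu=1500",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetwork("functional-247800", "192.168.49.0/24", "192.168.49.1"); err != nil {
		log.Fatal(err)
	}
}
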
	I1205 06:26:04.280468    2688 kic.go:121] calculated static IP "192.168.49.2" for the "functional-247800" container
	I1205 06:26:04.289850    2688 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 06:26:04.570185    2688 cli_runner.go:164] Run: docker volume create functional-247800 --label name.minikube.sigs.k8s.io=functional-247800 --label created_by.minikube.sigs.k8s.io=true
	I1205 06:26:04.773490    2688 oci.go:103] Successfully created a docker volume functional-247800
	I1205 06:26:04.778197    2688 cli_runner.go:164] Run: docker run --rm --name functional-247800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-247800 --entrypoint /usr/bin/test -v functional-247800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 06:26:05.929115    2688 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:05.929115    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:26:05.929712    2688 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.6269934s
	I1205 06:26:05.929712    2688 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:26:05.933012    2688 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:05.933012    2688 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:05.933012    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:26:05.933012    2688 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.6309893s
	I1205 06:26:05.933012    2688 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:26:05.933012    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:26:05.933542    2688 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.6308892s
	I1205 06:26:05.933542    2688 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:26:05.936123    2688 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:05.936123    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:26:05.936123    2688 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.6341004s
	I1205 06:26:05.936123    2688 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:26:05.937335    2688 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:05.937335    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:26:05.937335    2688 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.6347321s
	I1205 06:26:05.937335    2688 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:26:05.942906    2688 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:05.942906    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:26:05.942906    2688 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.6408839s
	I1205 06:26:05.942906    2688 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:26:05.967826    2688 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:05.967826    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:26:05.967826    2688 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.665172s
	I1205 06:26:05.967826    2688 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:26:06.019825    2688 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:26:06.019825    2688 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:26:06.019825    2688 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.7178014s
	I1205 06:26:06.019825    2688 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:26:06.019825    2688 cache.go:87] Successfully saved all images to host disk.
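
Each cached image above follows the same idempotent sequence: acquire a named lock for the target path, return early if the tar file already exists ("exists ... skipping"), and otherwise export it. A compact sketch of that pattern (illustrative; saveToTar is a hypothetical stand-in for the actual export step):

package imagecache

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// locks holds one mutex per cache path, like the named locks in the log.
var locks sync.Map

// cacheImage saves imageRef under cacheDir exactly once per destination.
func cacheImage(cacheDir, imageRef string, saveToTar func(ref, dest string) error) error {
	dest := filepath.Join(cacheDir, strings.ReplaceAll(imageRef, ":", "_"))
	mu, _ := locks.LoadOrStore(dest, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()
	if _, err := os.Stat(dest); err == nil {
		return nil // already cached: skip the save
	}
	if err := saveToTar(imageRef, dest); err != nil {
		return fmt.Errorf("save %s: %w", imageRef, err)
	}
	return nil
}
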
	I1205 06:26:06.651858    2688 cli_runner.go:217] Completed: docker run --rm --name functional-247800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-247800 --entrypoint /usr/bin/test -v functional-247800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.8736355s)
	I1205 06:26:06.651858    2688 oci.go:107] Successfully prepared a docker volume functional-247800
	I1205 06:26:06.651858    2688 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:26:06.655946    2688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:26:06.876196    2688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-05 06:26:06.858676947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:26:06.880185    2688 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:26:07.128169    2688 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-247800 --name functional-247800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-247800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-247800 --network functional-247800 --ip 192.168.49.2 --volume functional-247800:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 06:26:07.737885    2688 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Running}}
	I1205 06:26:07.798127    2688 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:26:07.852128    2688 cli_runner.go:164] Run: docker exec functional-247800 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:26:07.956392    2688 oci.go:144] the created container "functional-247800" has a running status.
	I1205 06:26:07.956436    2688 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa...
	I1205 06:26:08.065584    2688 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:26:08.142192    2688 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:26:08.210325    2688 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:26:08.210325    2688 kic_runner.go:114] Args: [docker exec --privileged functional-247800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:26:08.360125    2688 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa...
	I1205 06:26:10.460981    2688 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:26:10.517146    2688 machine.go:94] provisionDockerMachine start ...
	I1205 06:26:10.520170    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:10.579422    2688 main.go:143] libmachine: Using SSH client type: native
	I1205 06:26:10.592420    2688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:26:10.592420    2688 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:26:10.778555    2688 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:26:10.778597    2688 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:26:10.782813    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:10.837698    2688 main.go:143] libmachine: Using SSH client type: native
	I1205 06:26:10.838156    2688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:26:10.838156    2688 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:26:11.033604    2688 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:26:11.037127    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:11.092786    2688 main.go:143] libmachine: Using SSH client type: native
	I1205 06:26:11.092786    2688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:26:11.092786    2688 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:26:11.279484    2688 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:26:11.279484    2688 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:26:11.279484    2688 ubuntu.go:190] setting up certificates
	I1205 06:26:11.279484    2688 provision.go:84] configureAuth start
	I1205 06:26:11.282915    2688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:26:11.337912    2688 provision.go:143] copyHostCerts
	I1205 06:26:11.337971    2688 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:26:11.337971    2688 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:26:11.337971    2688 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:26:11.338982    2688 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:26:11.338982    2688 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:26:11.339536    2688 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:26:11.352426    2688 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:26:11.352426    2688 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:26:11.352426    2688 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:26:11.353334    2688 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
	I1205 06:26:11.476283    2688 provision.go:177] copyRemoteCerts
	I1205 06:26:11.480383    2688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:26:11.483275    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:11.540560    2688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:26:11.672669    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:26:11.699600    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:26:11.726568    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:26:11.751732    2688 provision.go:87] duration metric: took 472.1803ms to configureAuth
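
configureAuth above generates a server certificate whose subject organization is jenkins.functional-247800 and whose SANs cover 127.0.0.1, the container IP 192.168.49.2, the hostname functional-247800, localhost, and minikube. A self-contained sketch of producing such a certificate with crypto/x509 (illustrative and self-signed; the real flow signs with the minikube CA key pair copied above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-247800"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log; IP and DNS entries live in separate fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"functional-247800", "localhost", "minikube"},
	}
	// Self-signed here (template doubles as parent) purely for illustration.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
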
	I1205 06:26:11.751760    2688 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:26:11.751788    2688 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:26:11.755186    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:11.812387    2688 main.go:143] libmachine: Using SSH client type: native
	I1205 06:26:11.813569    2688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:26:11.813569    2688 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:26:12.000336    2688 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:26:12.000377    2688 ubuntu.go:71] root file system type: overlay
	I1205 06:26:12.000491    2688 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:26:12.004348    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:12.059515    2688 main.go:143] libmachine: Using SSH client type: native
	I1205 06:26:12.059944    2688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:26:12.059944    2688 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:26:12.253023    2688 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:26:12.256442    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:12.311245    2688 main.go:143] libmachine: Using SSH client type: native
	I1205 06:26:12.311741    2688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:26:12.311766    2688 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:26:13.684899    2688 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 06:26:12.247573865 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1205 06:26:13.685424    2688 machine.go:97] duration metric: took 3.1682336s to provisionDockerMachine
	I1205 06:26:13.685461    2688 client.go:176] duration metric: took 10.3742742s to LocalClient.Create
	I1205 06:26:13.685461    2688 start.go:167] duration metric: took 10.3742742s to libmachine.API.Create "functional-247800"
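
The provisioning command run at 06:26:12.311 is worth noting: it diffs the freshly rendered docker.service against the installed unit and only swaps the file, reloads systemd, and restarts the daemon when the two differ, so re-provisioning an unchanged machine is a no-op. The same write-if-changed idea, expressed locally in Go (illustrative; SyncUnit is a hypothetical helper, not minikube's code):

package provision

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// SyncUnit rewrites the unit file and bounces the daemon only when the
// rendered content actually differs from what is on disk.
func SyncUnit(path string, want []byte) error {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return nil // unchanged: skip daemon-reload and restart entirely
	}
	if err := os.WriteFile(path, want, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}
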
	I1205 06:26:13.685461    2688 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:26:13.685461    2688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:26:13.689944    2688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:26:13.693145    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:13.748642    2688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:26:13.877032    2688 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:26:13.884888    2688 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:26:13.884888    2688 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:26:13.884888    2688 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:26:13.884888    2688 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:26:13.886034    2688 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:26:13.886373    2688 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:26:13.890231    2688 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:26:13.903935    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:26:13.932208    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:26:13.958794    2688 start.go:296] duration metric: took 273.3289ms for postStartSetup
	I1205 06:26:13.964141    2688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:26:14.022429    2688 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:26:14.028526    2688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:26:14.032017    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:14.087579    2688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:26:14.224853    2688 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:26:14.234718    2688 start.go:128] duration metric: took 10.9294202s to createHost
	I1205 06:26:14.234718    2688 start.go:83] releasing machines lock for "functional-247800", held for 10.9297755s
	I1205 06:26:14.239677    2688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:26:14.300530    2688 out.go:179] * Found network options:
	I1205 06:26:14.303489    2688 out.go:179]   - HTTP_PROXY=localhost:55388
	W1205 06:26:14.305418    2688 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1205 06:26:14.307559    2688 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1205 06:26:14.310817    2688 out.go:179]   - HTTP_PROXY=localhost:55388
	I1205 06:26:14.313541    2688 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:26:14.317078    2688 ssh_runner.go:195] Run: cat /version.json
	I1205 06:26:14.317078    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:14.321097    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:14.371783    2688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:26:14.373080    2688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	W1205 06:26:14.497168    2688 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
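The status-127 failure above is the Windows host's command name (curl.exe) being replayed verbatim inside the Linux guest, where no such binary exists; this is what later surfaces as the registry connectivity warning. A hedged equivalent of the intended check, assuming curl is present in the kicbase image:

    docker exec functional-247800 curl -sS -m 2 https://registry.k8s.io/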
	I1205 06:26:14.502080    2688 ssh_runner.go:195] Run: systemctl --version
	I1205 06:26:14.516762    2688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:26:14.524703    2688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:26:14.528941    2688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:26:14.577131    2688 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 06:26:14.577131    2688 start.go:496] detecting cgroup driver to use...
	I1205 06:26:14.577131    2688 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:26:14.577131    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:26:14.605072    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1205 06:26:14.607626    2688 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:26:14.607626    2688 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 06:26:14.626535    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:26:14.642858    2688 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:26:14.647126    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:26:14.668391    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:26:14.687329    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:26:14.706883    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:26:14.727392    2688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:26:14.746299    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:26:14.766439    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:26:14.784047    2688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
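A quick way to spot-check what the sed edits above leave behind in /etc/containerd/config.toml (a sketch, not part of the run; expected values per the substitutions shown as comments):

    docker exec functional-247800 grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false          # matches the "cgroupfs" driver detected on the host
    #   conf_dir = "/etc/cni/net.d"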
	I1205 06:26:14.802541    2688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:26:14.819124    2688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:26:14.835132    2688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:26:14.967949    2688 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:26:15.120695    2688 start.go:496] detecting cgroup driver to use...
	I1205 06:26:15.120695    2688 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:26:15.127035    2688 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:26:15.150998    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:26:15.174786    2688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:26:15.242688    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:26:15.265217    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:26:15.281643    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:26:15.309274    2688 ssh_runner.go:195] Run: which cri-dockerd
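At this point crictl has been repointed from containerd to cri-dockerd; the rewritten file can be confirmed with:

    docker exec functional-247800 cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock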
	I1205 06:26:15.319944    2688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:26:15.332934    2688 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:26:15.356638    2688 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:26:15.507025    2688 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:26:15.642949    2688 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:26:15.642949    2688 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 06:26:15.667538    2688 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:26:15.690569    2688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:26:15.848187    2688 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:26:16.718081    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:26:16.742396    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:26:16.765197    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:26:16.791832    2688 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:26:16.937088    2688 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:26:17.078333    2688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:26:17.221053    2688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:26:17.245503    2688 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:26:17.266430    2688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:26:17.434304    2688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:26:17.544671    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:26:17.574917    2688 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:26:17.580462    2688 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:26:17.586690    2688 start.go:564] Will wait 60s for crictl version
	I1205 06:26:17.591370    2688 ssh_runner.go:195] Run: which crictl
	I1205 06:26:17.605201    2688 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:26:17.649568    2688 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
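The same query can be made with the endpoint given explicitly, which is useful when /etc/crictl.yaml is absent (a sketch):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version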
	I1205 06:26:17.652941    2688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:26:17.694041    2688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:26:17.734382    2688 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:26:17.737523    2688 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:26:17.870489    2688 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:26:17.875441    2688 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:26:17.883774    2688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
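The grep -v / echo pipeline above is minikube's idempotent hosts-file update: drop any stale host.minikube.internal entry, append the fresh one, and copy the result back over /etc/hosts. The outcome can be verified with:

    docker exec functional-247800 grep host.minikube.internal /etc/hosts
    # 192.168.65.254  host.minikube.internal   (tab-separated)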
	I1205 06:26:17.902524    2688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:26:17.956845    2688 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:26:17.956845    2688 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:26:17.961571    2688 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:26:17.994748    2688 docker.go:691] Got preloaded images: 
	I1205 06:26:17.994748    2688 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1205 06:26:17.994748    2688 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 06:26:18.006094    2688 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:26:18.011355    2688 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:26:18.015304    2688 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:26:18.015304    2688 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:26:18.019258    2688 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:26:18.019258    2688 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:26:18.025171    2688 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 06:26:18.026952    2688 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:26:18.030018    2688 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:26:18.031232    2688 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:26:18.039258    2688 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:26:18.040554    2688 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 06:26:18.041855    2688 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:26:18.045697    2688 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:26:18.049247    2688 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:26:18.056225    2688 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1205 06:26:18.083682    2688 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:26:18.133038    2688 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:26:18.181486    2688 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:26:18.232106    2688 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:26:18.280465    2688 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:26:18.330498    2688 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:26:18.381320    2688 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:26:18.431165    2688 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1205 06:26:18.548467    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 06:26:18.549145    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:26:18.580949    2688 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1205 06:26:18.581043    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:26:18.581043    2688 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:26:18.582290    2688 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1205 06:26:18.582390    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:26:18.582416    2688 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:26:18.585673    2688 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1205 06:26:18.585673    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:26:18.586544    2688 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:26:18.594802    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 06:26:18.609963    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:26:18.610962    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:26:18.640211    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:26:18.646872    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:26:18.647440    2688 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1205 06:26:18.647440    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:26:18.647484    2688 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:26:18.648134    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:26:18.648134    2688 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1205 06:26:18.648134    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:26:18.648134    2688 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1205 06:26:18.652793    2688 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:26:18.653736    2688 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1205 06:26:18.666358    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:26:18.667017    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:26:18.741353    2688 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1205 06:26:18.741353    2688 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1205 06:26:18.741353    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:26:18.741353    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:26:18.741407    2688 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:26:18.741407    2688 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:26:18.746035    2688 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:26:18.747595    2688 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:26:18.757659    2688 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1205 06:26:18.757659    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:26:18.757702    2688 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:26:18.761434    2688 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:26:18.830963    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:26:18.845844    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:26:18.845844    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 06:26:18.845844    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 06:26:18.845844    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1205 06:26:18.845844    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1205 06:26:18.852705    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:26:18.857716    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:26:18.868694    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 06:26:18.874920    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:26:18.932768    2688 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:26:18.932768    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:26:18.954003    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:26:18.961509    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:26:18.961509    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 06:26:18.961509    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1205 06:26:18.977559    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:26:19.053653    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 06:26:19.054261    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1205 06:26:19.063320    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 06:26:19.063320    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1205 06:26:19.117628    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 06:26:19.117860    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1205 06:26:19.128142    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 06:26:19.128142    2688 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 06:26:19.128142    2688 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:26:19.128142    2688 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:26:19.128142    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1205 06:26:19.132257    2688 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:26:19.233234    2688 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 06:26:19.233234    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1205 06:26:19.234259    2688 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:26:19.253225    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:26:19.447558    2688 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 06:26:19.448573    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1205 06:26:19.454559    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1205 06:26:20.309820    2688 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:26:20.309820    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1205 06:26:21.061602    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1205 06:26:21.061602    2688 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:26:21.061602    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1205 06:26:23.566351    2688 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (2.5046721s)
	I1205 06:26:23.566351    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1205 06:26:23.566439    2688 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:26:23.566460    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1205 06:26:26.296345    2688 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.7298463s)
	I1205 06:26:26.296345    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1205 06:26:26.296345    2688 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:26:26.296345    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1205 06:26:27.523630    2688 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (1.2272678s)
	I1205 06:26:27.523630    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 06:26:27.523630    2688 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:26:27.523630    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1205 06:26:28.887758    2688 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.3641095s)
	I1205 06:26:28.887758    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1205 06:26:28.887758    2688 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:26:28.887758    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1205 06:26:31.088481    2688 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (2.2006541s)
	I1205 06:26:31.088509    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1205 06:26:31.088575    2688 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:26:31.088575    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1205 06:26:33.067252    2688 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.9786484s)
	I1205 06:26:33.067252    2688 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1205 06:26:33.067252    2688 cache_images.go:125] Successfully loaded all cached images
	I1205 06:26:33.067252    2688 cache_images.go:94] duration metric: took 15.0722923s to LoadCachedImages
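Each image above followed the same two-step path: scp the tarball from the host-side cache into /var/lib/minikube/images, then stream it into dockerd. Reduced to a per-image sketch (commands as they appear in the run, etcd taken as the example):

    sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load
    docker images --format '{{.Repository}}:{{.Tag}}' | grep etcd   # confirm the load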
	I1205 06:26:33.067252    2688 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:26:33.067252    2688 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:26:33.072277    2688 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:26:33.148647    2688 cni.go:84] Creating CNI manager for ""
	I1205 06:26:33.148647    2688 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:26:33.148647    2688 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:26:33.148647    2688 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:26:33.149173    2688 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
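Once the v1.35.0-beta.0 binaries are in place (the transfer follows below), the generated file can be sanity-checked before kubeadm runs against it; a sketch, assuming this kubeadm version ships the config validate subcommand:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new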
	
	I1205 06:26:33.153328    2688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:26:33.166449    2688 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 06:26:33.170352    2688 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:26:33.186506    2688 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1205 06:26:33.186506    2688 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1205 06:26:33.186506    2688 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
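The ?checksum=file:... suffix on each URL makes the downloader verify the binary against its published .sha256 before use. Done by hand, the equivalent is the standard dl.k8s.io pattern (kubelet shown as the example):

    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check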
	I1205 06:26:33.191353    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:26:33.214361    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 06:26:33.214361    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 06:26:33.224127    2688 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 06:26:33.224182    2688 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 06:26:33.224304    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1205 06:26:33.224304    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1205 06:26:33.236375    2688 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 06:26:33.301993    2688 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 06:26:33.301993    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1205 06:26:35.022253    2688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:26:35.035253    2688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:26:35.054521    2688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:26:35.076676    2688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1205 06:26:35.102163    2688 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:26:35.110138    2688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:26:35.129981    2688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:26:35.271680    2688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:26:35.293325    2688 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:26:35.293325    2688 certs.go:195] generating shared ca certs ...
	I1205 06:26:35.293325    2688 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:26:35.316508    2688 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:26:35.334050    2688 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:26:35.334050    2688 certs.go:257] generating profile certs ...
	I1205 06:26:35.334050    2688 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:26:35.334050    2688 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.crt with IP's: []
	I1205 06:26:35.419978    2688 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.crt ...
	I1205 06:26:35.419978    2688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.crt: {Name:mk2c8f0ae7ef79098b3d3b1b55fd95bebdb114ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:26:35.420975    2688 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key ...
	I1205 06:26:35.420975    2688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key: {Name:mk5d01ffcf76653f53f8fd38fe19bb3bbf8982b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:26:35.421973    2688 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:26:35.421973    2688 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt.870be15d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 06:26:35.517999    2688 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt.870be15d ...
	I1205 06:26:35.517999    2688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt.870be15d: {Name:mk1a27d323fd10b8bf439f1f8244acf1e717fc05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:26:35.517999    2688 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d ...
	I1205 06:26:35.517999    2688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d: {Name:mk546dd83a4de541274887882cda649311030407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:26:35.519001    2688 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt.870be15d -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt
	I1205 06:26:35.532999    2688 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key
	I1205 06:26:35.533998    2688 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:26:35.533998    2688 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt with IP's: []
	I1205 06:26:35.660465    2688 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt ...
	I1205 06:26:35.660465    2688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt: {Name:mk26a338b61db9b8b3476521857057bd4a519ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:26:35.661466    2688 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key ...
	I1205 06:26:35.661466    2688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key: {Name:mk4aaf3e7e19188d6865cfad685efbf9c4364ef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:26:35.674194    2688 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:26:35.674194    2688 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:26:35.674194    2688 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:26:35.675193    2688 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:26:35.675193    2688 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:26:35.675193    2688 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:26:35.675193    2688 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:26:35.676194    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:26:35.708721    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:26:35.733639    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:26:35.762259    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:26:35.786015    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:26:35.814189    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
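The apiserver certificate generated above was issued for the service VIP, loopback, and node IPs; once copied to the node it can be inspected for its SANs (a sketch, using the openssl binary the run probes for later):

    docker exec functional-247800 openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 among the entries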
	I1205 06:26:35.838924    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:26:35.869136    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:26:35.902952    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:26:35.932390    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:26:35.957833    2688 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:26:35.987777    2688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:26:36.013428    2688 ssh_runner.go:195] Run: openssl version
	I1205 06:26:36.028874    2688 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:26:36.048167    2688 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:26:36.068169    2688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:26:36.078551    2688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:26:36.083759    2688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:26:36.131976    2688 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:26:36.149352    2688 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
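The b5213941.0 link follows OpenSSL's subject-hash convention (<hash>.0 pointing at the PEM), which is how verifiers locate a CA in /etc/ssl/certs; the hash is exactly what the openssl x509 -hash call above prints:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the /etc/ssl/certs/b5213941.0 symlink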
	I1205 06:26:36.166989    2688 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:26:36.186323    2688 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:26:36.205017    2688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:26:36.213433    2688 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:26:36.217930    2688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:26:36.269339    2688 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:26:36.286938    2688 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8036.pem /etc/ssl/certs/51391683.0
	I1205 06:26:36.306092    2688 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:26:36.323299    2688 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:26:36.341611    2688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:26:36.350322    2688 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:26:36.354368    2688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:26:36.401853    2688 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:26:36.419191    2688 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/80362.pem /etc/ssl/certs/3ec20f2e.0
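
The `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's hashed-directory convention: each CA certificate staged under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-name hash plus a `.0` suffix (e.g. b5213941.0), which is how TLS clients on the node find it. A minimal Go sketch of that step, using only the two commands visible in the log; the function name is illustrative, not minikube's:

```go
// Hypothetical sketch of the symlink step performed by the log lines above:
// hash a CA cert with OpenSSL and link it into /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func linkCACert(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject-name hash
	// OpenSSL uses to look certificates up in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// `ln -fs`, as in the log: replace any stale link in place.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```
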
	I1205 06:26:36.436659    2688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:26:36.444390    2688 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:26:36.444390    2688 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:26:36.448289    2688 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:26:36.485341    2688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:26:36.502102    2688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:26:36.514872    2688 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:26:36.519229    2688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:26:36.531925    2688 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:26:36.531925    2688 kubeadm.go:158] found existing configuration files:
	
	I1205 06:26:36.536089    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:26:36.549759    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:26:36.554000    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:26:36.571447    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:26:36.585473    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:26:36.590891    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:26:36.608450    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:26:36.622053    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:26:36.626226    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:26:36.646596    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:26:36.660514    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:26:36.665150    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
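
Each grep/rm pair above is minikube's stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8441. Since none of the four files exist on this first start, every grep exits with status 2 and the `rm -f` is a no-op before kubeadm regenerates them. A hedged Go sketch of the pattern (the helper name is hypothetical, not minikube's source):

```go
// Illustrative sketch of the cleanup cycle visible above: keep a kubeconfig
// only if it already points at the expected control-plane endpoint,
// otherwise delete it so kubeadm can recreate it.
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleConfigs(endpoint string) {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent (status 1) or the
		// file is missing (status 2) - the "will remove" case in the log.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
			fmt.Printf("removed stale %s\n", conf)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8441")
}
```
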
	I1205 06:26:36.685597    2688 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:26:36.804157    2688 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 06:26:36.894387    2688 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:26:36.989929    2688 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:30:38.689768    2688 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 06:30:38.689768    2688 kubeadm.go:319] 
	I1205 06:30:38.689909    2688 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:30:38.694074    2688 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:30:38.694074    2688 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:30:38.694074    2688 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:30:38.694074    2688 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 06:30:38.694724    2688 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 06:30:38.694724    2688 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 06:30:38.694724    2688 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 06:30:38.694724    2688 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 06:30:38.694724    2688 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 06:30:38.694724    2688 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 06:30:38.694724    2688 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 06:30:38.695296    2688 kubeadm.go:319] CONFIG_INET: enabled
	I1205 06:30:38.695438    2688 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 06:30:38.695470    2688 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 06:30:38.695470    2688 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 06:30:38.695470    2688 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 06:30:38.695470    2688 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 06:30:38.695470    2688 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 06:30:38.695999    2688 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 06:30:38.696234    2688 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 06:30:38.696425    2688 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 06:30:38.696568    2688 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 06:30:38.696705    2688 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 06:30:38.696829    2688 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 06:30:38.696953    2688 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 06:30:38.696953    2688 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 06:30:38.696953    2688 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 06:30:38.696953    2688 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 06:30:38.696953    2688 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 06:30:38.696953    2688 kubeadm.go:319] OS: Linux
	I1205 06:30:38.696953    2688 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:30:38.697541    2688 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:30:38.697641    2688 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:30:38.697738    2688 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:30:38.697862    2688 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:30:38.697957    2688 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:30:38.698054    2688 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:30:38.698179    2688 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:30:38.698267    2688 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:30:38.698454    2688 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:30:38.698742    2688 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:30:38.698970    2688 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:30:38.699130    2688 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:30:38.701252    2688 out.go:252]   - Generating certificates and keys ...
	I1205 06:30:38.701252    2688 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:30:38.701252    2688 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-247800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-247800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:30:38.701830    2688 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:30:38.702804    2688 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:30:38.702804    2688 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:30:38.702804    2688 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:30:38.702804    2688 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:30:38.702804    2688 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:30:38.702804    2688 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:30:38.702804    2688 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:30:38.702804    2688 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:30:38.702804    2688 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:30:38.702804    2688 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:30:38.705798    2688 out.go:252]   - Booting up control plane ...
	I1205 06:30:38.706798    2688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:30:38.706798    2688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:30:38.706798    2688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:30:38.706798    2688 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:30:38.706798    2688 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:30:38.706798    2688 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:30:38.706798    2688 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:30:38.706798    2688 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:30:38.707842    2688 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:30:38.707842    2688 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:30:38.707842    2688 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000755308s
	I1205 06:30:38.707842    2688 kubeadm.go:319] 
	I1205 06:30:38.707842    2688 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:30:38.707842    2688 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:30:38.707842    2688 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:30:38.707842    2688 kubeadm.go:319] 
	I1205 06:30:38.707842    2688 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:30:38.707842    2688 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:30:38.708834    2688 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:30:38.708834    2688 kubeadm.go:319] 
	W1205 06:30:38.708834    2688 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-247800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-247800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000755308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
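
The wait-control-plane failure above means kubeadm polled the kubelet's local healthz endpoint for the full 4m0s window and never found a listener on port 10248 ("connection refused"), i.e. the kubelet process itself never came up. Roughly what that probe loop does, as a standalone Go sketch; the timeout values here are illustrative, not kubeadm's:

```go
// Minimal sketch of a kubelet health probe against the endpoint named in
// the log ("curl -sSL http://127.0.0.1:10248/healthz").
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitKubeletHealthy(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is serving /healthz
			}
		}
		// "connection refused", as in the log, means nothing is listening
		// on 10248 yet; back off and retry until the deadline.
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitKubeletHealthy(10 * time.Second))
}
```
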
	
	I1205 06:30:38.714135    2688 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 06:30:39.169000    2688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:30:39.187648    2688 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:30:39.192369    2688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:30:39.204354    2688 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:30:39.204354    2688 kubeadm.go:158] found existing configuration files:
	
	I1205 06:30:39.208932    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:30:39.222003    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:30:39.226445    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:30:39.242913    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:30:39.255494    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:30:39.259897    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:30:39.277964    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:30:39.292455    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:30:39.298143    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:30:39.319113    2688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:30:39.334224    2688 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:30:39.340257    2688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:30:39.359704    2688 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:30:39.473519    2688 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 06:30:39.553451    2688 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:30:39.650051    2688 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:34:40.475800    2688 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:34:40.475800    2688 kubeadm.go:319] 
	I1205 06:34:40.475893    2688 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:34:40.480077    2688 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:34:40.480077    2688 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:34:40.480702    2688 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:34:40.480702    2688 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 06:34:40.480702    2688 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 06:34:40.480702    2688 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 06:34:40.480702    2688 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 06:34:40.480702    2688 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 06:34:40.481232    2688 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 06:34:40.481348    2688 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 06:34:40.481348    2688 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 06:34:40.481348    2688 kubeadm.go:319] CONFIG_INET: enabled
	I1205 06:34:40.481348    2688 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 06:34:40.481348    2688 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 06:34:40.481348    2688 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 06:34:40.481348    2688 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 06:34:40.481965    2688 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 06:34:40.482028    2688 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 06:34:40.482972    2688 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 06:34:40.483109    2688 kubeadm.go:319] OS: Linux
	I1205 06:34:40.483199    2688 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:34:40.483244    2688 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:34:40.483244    2688 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:34:40.483244    2688 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:34:40.483244    2688 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:34:40.483244    2688 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:34:40.483244    2688 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:34:40.483244    2688 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:34:40.483776    2688 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:34:40.484064    2688 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:34:40.484200    2688 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:34:40.484200    2688 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:34:40.484200    2688 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:34:40.491111    2688 out.go:252]   - Generating certificates and keys ...
	I1205 06:34:40.491111    2688 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:34:40.491111    2688 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:34:40.491111    2688 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:34:40.491111    2688 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:34:40.491711    2688 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:34:40.491711    2688 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:34:40.491711    2688 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:34:40.491711    2688 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:34:40.491711    2688 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:34:40.491711    2688 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:34:40.491711    2688 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:34:40.491711    2688 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:34:40.491711    2688 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:34:40.491711    2688 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:34:40.491711    2688 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:34:40.492673    2688 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:34:40.492673    2688 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:34:40.492673    2688 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:34:40.492673    2688 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:34:40.498210    2688 out.go:252]   - Booting up control plane ...
	I1205 06:34:40.498210    2688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:34:40.498210    2688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:34:40.498210    2688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:34:40.499200    2688 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:34:40.499200    2688 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:34:40.499200    2688 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:34:40.499200    2688 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:34:40.499200    2688 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:34:40.499200    2688 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:34:40.500205    2688 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:34:40.500205    2688 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129822s
	I1205 06:34:40.500205    2688 kubeadm.go:319] 
	I1205 06:34:40.500205    2688 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:34:40.500205    2688 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:34:40.500205    2688 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:34:40.500205    2688 kubeadm.go:319] 
	I1205 06:34:40.500205    2688 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:34:40.500205    2688 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:34:40.500205    2688 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:34:40.500205    2688 kubeadm.go:319] 
	I1205 06:34:40.500205    2688 kubeadm.go:403] duration metric: took 8m4.0489338s to StartCluster
	I1205 06:34:40.501205    2688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:34:40.504195    2688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:34:40.562159    2688 cri.go:89] found id: ""
	I1205 06:34:40.562213    2688 logs.go:282] 0 containers: []
	W1205 06:34:40.562213    2688 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:34:40.562213    2688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:34:40.566543    2688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:34:40.610251    2688 cri.go:89] found id: ""
	I1205 06:34:40.610251    2688 logs.go:282] 0 containers: []
	W1205 06:34:40.610251    2688 logs.go:284] No container was found matching "etcd"
	I1205 06:34:40.610313    2688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:34:40.614324    2688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:34:40.649671    2688 cri.go:89] found id: ""
	I1205 06:34:40.649671    2688 logs.go:282] 0 containers: []
	W1205 06:34:40.649671    2688 logs.go:284] No container was found matching "coredns"
	I1205 06:34:40.649671    2688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:34:40.654891    2688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:34:40.697689    2688 cri.go:89] found id: ""
	I1205 06:34:40.697689    2688 logs.go:282] 0 containers: []
	W1205 06:34:40.697689    2688 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:34:40.697689    2688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:34:40.702332    2688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:34:40.741632    2688 cri.go:89] found id: ""
	I1205 06:34:40.741666    2688 logs.go:282] 0 containers: []
	W1205 06:34:40.741666    2688 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:34:40.741666    2688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:34:40.746046    2688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:34:40.785141    2688 cri.go:89] found id: ""
	I1205 06:34:40.785141    2688 logs.go:282] 0 containers: []
	W1205 06:34:40.785141    2688 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:34:40.785141    2688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:34:40.789842    2688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:34:40.831697    2688 cri.go:89] found id: ""
	I1205 06:34:40.831697    2688 logs.go:282] 0 containers: []
	W1205 06:34:40.831697    2688 logs.go:284] No container was found matching "kindnet"
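
With the init failed, minikube sweeps the CRI for each control-plane component by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and finds zero containers in every case, confirming the kubelet never launched the static pods. Each sweep reduces to one crictl invocation per name; a hedged Go sketch, not minikube's source:

```go
// Illustrative sketch of the container sweep above: ask the CRI for all
// containers, running or exited, matching each control-plane component.
// An empty result for every name means no static pod was ever created.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}
```
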
	I1205 06:34:40.831697    2688 logs.go:123] Gathering logs for container status ...
	I1205 06:34:40.831697    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:34:40.878184    2688 logs.go:123] Gathering logs for kubelet ...
	I1205 06:34:40.878184    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:34:40.940434    2688 logs.go:123] Gathering logs for dmesg ...
	I1205 06:34:40.940434    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:34:40.969556    2688 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:34:40.969556    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:34:41.056795    2688 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:34:41.043187   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.048164   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.049928   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.051204   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.052317   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:34:41.043187   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.048164   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.049928   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.051204   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:41.052317   10303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:34:41.056795    2688 logs.go:123] Gathering logs for Docker ...
	I1205 06:34:41.056795    2688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1205 06:34:41.085988    2688 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129822s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:34:41.085988    2688 out.go:285] * 
	W1205 06:34:41.086787    2688 out.go:285] * 
	W1205 06:34:41.088985    2688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:34:41.093771    2688 out.go:203] 
	W1205 06:34:41.097010    2688 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129822s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:34:41.097010    2688 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:34:41.097010    2688 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:34:41.101032    2688 out.go:203] 
	
	
	==> Docker <==
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.547851034Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.547927343Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.547937345Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.547943245Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.547949346Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.547972149Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.548001853Z" level=info msg="Initializing buildkit"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.702009024Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.710864561Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.711070988Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.711083289Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:26:16 functional-247800 dockerd[1177]: time="2025-12-05T06:26:16.711086590Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:26:16 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:26:17 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:26:17 functional-247800 cri-dockerd[1471]: time="2025-12-05T06:26:17Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:26:17 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:34:42.820021   10458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:42.821102   10458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:42.823924   10458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:42.827851   10458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:42.828527   10458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000924] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000857] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000844] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000980] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001137] FS:  0000000000000000 GS:  0000000000000000
	[  +6.627225] CPU: 14 PID: 46535 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000774] RIP: 0033:0x7fbd57798b20
	[  +0.000648] Code: Unable to access opcode bytes at RIP 0x7fbd57798af6.
	[  +0.000879] RSP: 002b:00007ffc874abef0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001134] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000925] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000959] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000878] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000835] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000919] FS:  0000000000000000 GS:  0000000000000000
	[  +0.840992] CPU: 1 PID: 46644 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000857] RIP: 0033:0x7fd9eabc4b20
	[  +0.000416] Code: Unable to access opcode bytes at RIP 0x7fd9eabc4af6.
	[  +0.000673] RSP: 002b:00007ffc93460520 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001082] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001158] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001030] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001035] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000969] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000975] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:34:42 up  2:08,  0 user,  load average: 0.18, 0.45, 0.76
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:34:39 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:34:40 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 06:34:40 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:40 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:40 functional-247800 kubelet[10183]: E1205 06:34:40.530079   10183 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:34:40 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:34:40 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:34:41 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 06:34:41 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:41 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:41 functional-247800 kubelet[10315]: E1205 06:34:41.309830   10315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:34:41 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:34:41 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:34:41 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 06:34:41 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:41 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:42 functional-247800 kubelet[10343]: E1205 06:34:42.031871   10343 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:34:42 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:34:42 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:34:42 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 05 06:34:42 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:42 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:34:42 functional-247800 kubelet[10432]: E1205 06:34:42.780501   10432 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:34:42 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:34:42 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
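The dump above converges on a single root cause: the kubelet shipped with v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1" repeating in the kubelet journal, restart counter past 320), so the control plane never comes up and every dependent check fails with connection refused on port 8441. The SystemVerification warning spells out the escape hatch: set the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skip the validation (the kubeadm invocation above already passes SystemVerification in --ignore-preflight-errors). A minimal sketch of such a KubeletConfiguration fragment, based only on that warning text and not applied in this run — the file name is hypothetical, and minikube would normally deliver it through the "[patches] ... kubeletconfiguration" mechanism visible in the kubeadm output:

	# kubelet-cgroupv1-patch.yaml (hypothetical name, illustration only)
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Assumption per the SystemVerification warning: allow kubelet v1.35 to
	# run on a cgroup v1 host instead of failing config validation at startup.
	failCgroupV1: false
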
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 6 (631.3968ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 06:34:43.868392   13928 status.go:458] kubeconfig endpoint: get endpoint: "functional-247800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
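The status output above also flags a stale kubectl context ("functional-247800" missing from the kubeconfig). Its own suggested repair, spelled out against this profile — illustrative only, not executed in this run:

	out/minikube-windows-amd64.exe update-context -p functional-247800
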
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (521.90s)
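For the record, minikube's remediation hint in the trace above would translate to the start invocation below. Note that --extra-config=kubelet.cgroup-driver=systemd targets the cgroup driver, while the failure in this run is the cgroup v1 validation itself, so this stands only as the tool's suggested next step, not a verified fix:

	out/minikube-windows-amd64.exe start -p functional-247800 --extra-config=kubelet.cgroup-driver=systemd
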

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (376.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1205 06:34:43.916059    8036 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-247800 --alsologtostderr -v=8
E1205 06:35:22.605006    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:50.315088    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:37:23.900003    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:40:22.609963    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-247800 --alsologtostderr -v=8: exit status 80 (6m11.677959s)

                                                
                                                
-- stdout --
	* [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:34:43.990318    3816 out.go:360] Setting OutFile to fd 932 ...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.034404    3816 out.go:374] Setting ErrFile to fd 1564...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.048005    3816 out.go:368] Setting JSON to false
	I1205 06:34:44.051134    3816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7741,"bootTime":1764908742,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:34:44.051134    3816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:34:44.054997    3816 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:34:44.057041    3816 notify.go:221] Checking for updates...
	I1205 06:34:44.057041    3816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:44.060615    3816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:34:44.063386    3816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:34:44.065338    3816 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:34:44.068100    3816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:34:44.070765    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:44.071546    3816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:34:44.185014    3816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:34:44.190117    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.434951    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.415349563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.438948    3816 out.go:179] * Using the docker driver based on existing profile
	I1205 06:34:44.442716    3816 start.go:309] selected driver: docker
	I1205 06:34:44.442716    3816 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.442716    3816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:34:44.449451    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.693650    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.673163701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.776708    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:44.776708    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:44.776708    3816 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.779353    3816 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:34:44.789396    3816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:34:44.793121    3816 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:34:44.794774    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:44.794774    3816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:34:44.844630    3816 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:44.871213    3816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:34:44.871213    3816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:34:45.153466    3816 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:45.154472    3816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:34:45.156762    3816 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:34:45.156819    3816 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:45.157157    3816 start.go:364] duration metric: took 122.3µs to acquireMachinesLock for "functional-247800"
	I1205 06:34:45.157157    3816 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:34:45.157157    3816 fix.go:54] fixHost starting: 
	I1205 06:34:45.165313    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:45.243648    3816 fix.go:112] recreateIfNeeded on functional-247800: state=Running err=<nil>
	W1205 06:34:45.243648    3816 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:34:45.267762    3816 out.go:252] * Updating the running docker "functional-247800" container ...
	I1205 06:34:45.269766    3816 machine.go:94] provisionDockerMachine start ...
	I1205 06:34:45.274766    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:45.449049    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:45.449049    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:45.449049    3816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:34:45.686505    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:45.686505    3816 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:34:45.691507    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:46.703091    3816 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800: (1.0115691s)
	I1205 06:34:46.706016    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:46.706016    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:46.706016    3816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:34:47.035712    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:47.042684    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.107199    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:47.107199    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:47.107199    3816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:34:47.308149    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:34:47.308197    3816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:34:47.308318    3816 ubuntu.go:190] setting up certificates
	I1205 06:34:47.308318    3816 provision.go:84] configureAuth start
	I1205 06:34:47.315253    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:47.380504    3816 provision.go:143] copyHostCerts
	I1205 06:34:47.381517    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:34:47.381517    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:34:47.382508    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:34:47.382508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:34:47.383507    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:34:47.384508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:34:47.385507    3816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
	I1205 06:34:47.573727    3816 provision.go:177] copyRemoteCerts
	I1205 06:34:47.580429    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:34:47.585428    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.664000    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:47.815162    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1205 06:34:47.815801    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:34:47.849954    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1205 06:34:47.850956    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:34:47.876175    3816 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.876248    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:34:47.876248    3816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.7217371s
	I1205 06:34:47.876248    3816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:34:47.883801    3816 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.883881    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:34:47.883881    3816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.72937s
	I1205 06:34:47.883881    3816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:34:47.908586    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1205 06:34:47.909421    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:34:47.925048    3816 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.925345    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:34:47.925345    3816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.7708333s
	I1205 06:34:47.925345    3816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:34:47.926059    3816 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.926059    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:34:47.926059    3816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.7715471s
	I1205 06:34:47.926059    3816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:34:47.936781    3816 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.937442    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:34:47.937555    3816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.7830428s
	I1205 06:34:47.937609    3816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:34:47.946154    3816 provision.go:87] duration metric: took 637.8269ms to configureAuth
	I1205 06:34:47.946231    3816 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:34:47.946358    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:47.951931    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.990646    3816 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.990646    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:34:47.991641    3816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8371282s
	I1205 06:34:47.991641    3816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:34:48.007838    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.008431    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.008476    3816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:34:48.018898    3816 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.018898    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:34:48.018898    3816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.8643851s
	I1205 06:34:48.018898    3816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:34:48.061664    3816 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.062004    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:34:48.062141    3816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9076274s
	I1205 06:34:48.062141    3816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:34:48.062198    3816 cache.go:87] Successfully saved all images to host disk.
	I1205 06:34:48.196159    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:34:48.196159    3816 ubuntu.go:71] root file system type: overlay
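For reference, the fstype probe just run over SSH is a single df call; a minimal sketch of the same check (expected output matches the "overlay" in the log):

    # Print only the filesystem type of the root mount; the KIC base image
    # reports "overlay" because the container root is an overlayfs layer.
    df --output=fstype / | tail -n 1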
	I1205 06:34:48.196159    3816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:34:48.200167    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.256431    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.257239    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.257347    3816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:34:48.462598    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:34:48.466014    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.522845    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.523383    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.523415    3816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:34:48.714113    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
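The empty output above suggests the generated unit matched the one already installed, so the restart branch was skipped. The one-liner is a generic "replace only on drift" pattern; a sketch with illustrative variable names:

    # diff -u exits non-zero when NEW differs from CURRENT, so the block after
    # || installs the new unit and restarts docker only when something changed.
    sudo diff -u "$CURRENT" "$NEW" || {
      sudo mv "$NEW" "$CURRENT"
      sudo systemctl daemon-reload && sudo systemctl -f restart docker
    }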
	I1205 06:34:48.714641    3816 machine.go:97] duration metric: took 3.444826s to provisionDockerMachine
	I1205 06:34:48.714700    3816 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:34:48.714747    3816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:34:48.721762    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:34:48.726053    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.800573    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:48.947188    3816 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:34:48.954494    3816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_ID="12"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:34:48.954494    3816 command_runner.go:130] > ID=debian
	I1205 06:34:48.954494    3816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:34:48.954494    3816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:34:48.955010    3816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:34:48.955099    3816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:34:48.955099    3816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:34:48.955806    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:34:48.955806    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /etc/ssl/certs/80362.pem
	I1205 06:34:48.956436    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:34:48.956436    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> /etc/test/nested/copy/8036/hosts
	I1205 06:34:48.960827    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:34:48.973199    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:34:49.002014    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:34:49.027943    3816 start.go:296] duration metric: took 313.2383ms for postStartSetup
	I1205 06:34:49.031806    3816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:34:49.035611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.090476    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.213008    3816 command_runner.go:130] > 1%
	I1205 06:34:49.217907    3816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:34:49.227048    3816 command_runner.go:130] > 950G
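In the two df probes above, NR==2 selects the data row beneath df's header line; column 5 of df -h is Use% (the "1%" just logged) and column 4 of df -BG is Avail in 1G blocks (the "950G"). Standalone form for reference:

    # One value each: percent of /var in use, then gigabytes still free.
    df -h  /var | awk 'NR==2{print $5}'   # e.g. 1%
    df -BG /var | awk 'NR==2{print $4}'   # e.g. 950G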
	I1205 06:34:49.227093    3816 fix.go:56] duration metric: took 4.0698775s for fixHost
	I1205 06:34:49.227184    3816 start.go:83] releasing machines lock for "functional-247800", held for 4.069942s
	I1205 06:34:49.230591    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:49.286648    3816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:34:49.290773    3816 ssh_runner.go:195] Run: cat /version.json
	I1205 06:34:49.290773    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.294768    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.346982    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.347419    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.463868    3816 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:34:49.468593    3816 ssh_runner.go:195] Run: systemctl --version
	I1205 06:34:49.473361    3816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1205 06:34:49.473361    3816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 06:34:49.482411    3816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:34:49.482411    3816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:34:49.486655    3816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:34:49.495075    3816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:34:49.495101    3816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:34:49.499557    3816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:34:49.512091    3816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:34:49.512091    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:49.512091    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:49.512091    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:49.534248    3816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:34:49.538479    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:34:49.557417    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:34:49.572725    3816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:34:49.577000    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1205 06:34:49.583562    3816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:34:49.583562    3816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 06:34:49.600012    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.618632    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:34:49.636357    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.654641    3816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:34:49.675114    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:34:49.696597    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:34:49.715167    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
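The sed series above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, pause:3.10.1 as the sandbox image, the runc v2 shim, /etc/cni/net.d as the CNI conf_dir, and unprivileged ports enabled. A quick way to spot-check the result (key names taken from the edits above):

    # Show the lines the sed edits are expected to have produced.
    sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml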
	I1205 06:34:49.738213    3816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:34:49.750303    3816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:34:49.754900    3816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
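Both kernel settings touched above are prerequisites for pod networking: bridged traffic must traverse iptables, and IPv4 forwarding must be on. Equivalent explicit writes (the bridge key requires the br_netfilter module to be loaded):

    # sysctl -w is the immediate, non-persistent form of the checks in the log;
    # persisting them would normally go through /etc/sysctl.d.
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1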
	I1205 06:34:49.771255    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:49.909849    3816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:34:50.068262    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:50.068262    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:50.073308    3816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:34:50.092739    3816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1205 06:34:50.092785    3816 command_runner.go:130] > [Unit]
	I1205 06:34:50.092785    3816 command_runner.go:130] > Description=Docker Application Container Engine
	I1205 06:34:50.092785    3816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1205 06:34:50.092828    3816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1205 06:34:50.092828    3816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1205 06:34:50.092828    3816 command_runner.go:130] > Requires=docker.socket
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitBurst=3
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitIntervalSec=60
	I1205 06:34:50.092884    3816 command_runner.go:130] > [Service]
	I1205 06:34:50.092884    3816 command_runner.go:130] > Type=notify
	I1205 06:34:50.092884    3816 command_runner.go:130] > Restart=always
	I1205 06:34:50.092919    3816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1205 06:34:50.092943    3816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1205 06:34:50.092943    3816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1205 06:34:50.092943    3816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1205 06:34:50.092943    3816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1205 06:34:50.092943    3816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1205 06:34:50.092943    3816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1205 06:34:50.092943    3816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNOFILE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNPROC=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitCORE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1205 06:34:50.092943    3816 command_runner.go:130] > TasksMax=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > TimeoutStartSec=0
	I1205 06:34:50.092943    3816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1205 06:34:50.092943    3816 command_runner.go:130] > Delegate=yes
	I1205 06:34:50.092943    3816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1205 06:34:50.092943    3816 command_runner.go:130] > KillMode=process
	I1205 06:34:50.092943    3816 command_runner.go:130] > OOMScoreAdjust=-500
	I1205 06:34:50.092943    3816 command_runner.go:130] > [Install]
	I1205 06:34:50.092943    3816 command_runner.go:130] > WantedBy=multi-user.target
	I1205 06:34:50.097721    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.125496    3816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:34:50.186929    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.209805    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:34:50.227504    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:50.252330    3816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
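With that endpoint in /etc/crictl.yaml, crictl now targets cri-dockerd rather than containerd (compare the containerd endpoint written at 06:34:49). The same thing can be expressed per invocation instead of via the config file:

    # Override the configured endpoint for a single call; equivalent to the
    # runtime-endpoint just written to /etc/crictl.yaml.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version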
	I1205 06:34:50.256641    3816 ssh_runner.go:195] Run: which cri-dockerd
	I1205 06:34:50.264328    3816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1205 06:34:50.269234    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:34:50.282005    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:34:50.306573    3816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:34:50.447619    3816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:34:50.580607    3816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:34:50.581126    3816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 06:34:50.605071    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:34:50.630349    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:50.782135    3816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:34:51.643866    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:34:51.667031    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:34:51.689935    3816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 06:34:51.715903    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:51.740104    3816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:34:51.897148    3816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:34:52.038509    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.188129    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:34:52.216759    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:34:52.241711    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.388958    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:34:52.491038    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:52.508998    3816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:34:52.514460    3816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:34:52.523944    3816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1205 06:34:52.524474    3816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:34:52.524548    3816 command_runner.go:130] > Device: 0,112	Inode: 1756        Links: 1
	I1205 06:34:52.524589    3816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1205 06:34:52.524606    3816 command_runner.go:130] > Access: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524642    3816 command_runner.go:130] > Modify: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] > Change: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] >  Birth: -
	I1205 06:34:52.524737    3816 start.go:564] Will wait 60s for crictl version
	I1205 06:34:52.529361    3816 ssh_runner.go:195] Run: which crictl
	I1205 06:34:52.536028    3816 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:34:52.539850    3816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:34:52.581379    3816 command_runner.go:130] > Version:  0.1.0
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeName:  docker
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeVersion:  29.0.4
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:34:52.581379    3816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 06:34:52.585592    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.624737    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.628712    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.665154    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.668797    3816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:34:52.672375    3816 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:34:52.798876    3816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:34:52.801876    3816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:34:52.809731    3816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
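The dig above resolved host.docker.internal to the Windows host, and the grep confirms the host.minikube.internal alias is already present in /etc/hosts. Had it been missing, an idempotent append would look like this (IP and name taken from the log above):

    # Add the mapping only when it is absent.
    grep -q 'host.minikube.internal' /etc/hosts ||
      echo '192.168.65.254 host.minikube.internal' | sudo tee -a /etc/hosts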
	I1205 06:34:52.813378    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:52.870537    3816 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:34:52.870721    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:52.873969    3816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:52.909019    3816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 06:34:52.909019    3816 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:34:52.909019    3816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:34:52.909019    3816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:34:52.913141    3816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:34:52.986014    3816 command_runner.go:130] > cgroupfs
	I1205 06:34:52.986014    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:52.986014    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:52.986014    3816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:34:52.986014    3816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:34:52.986014    3816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
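
On a fresh control plane this generated file would be handed to kubeadm directly; here the cluster is being restarted, so minikube stages it as kubeadm.yaml.new (scp below) and later diffs it against the live copy before deciding whether reconfiguration is needed. The direct form, for reference only:

    # Bootstrap with the generated config (fresh-init case only).
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml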
	
	I1205 06:34:52.990595    3816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubeadm
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubectl
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubelet
	I1205 06:34:53.003509    3816 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:34:53.008042    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:34:53.020762    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:34:53.041328    3816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:34:53.061676    3816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1205 06:34:53.085180    3816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:34:53.093591    3816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:34:53.098459    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:53.247095    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:53.952452    3816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:34:53.952558    3816 certs.go:195] generating shared ca certs ...
	I1205 06:34:53.952558    3816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:53.953085    3816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:34:53.953228    3816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:34:53.953228    3816 certs.go:257] generating profile certs ...
	I1205 06:34:53.954037    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:34:53.954334    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:34:53.954527    3816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:34:53.954527    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:34:53.954631    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:34:53.954814    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:34:53.954910    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:34:53.954973    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:34:53.955045    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:34:53.955116    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:34:53.955223    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:34:53.955290    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:34:53.955826    3816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:34:53.955954    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:34:53.956129    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:34:53.956912    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:34:53.957083    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem -> /usr/share/ca-certificates/8036.pem
	I1205 06:34:53.957119    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /usr/share/ca-certificates/80362.pem
	I1205 06:34:53.957269    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:53.958214    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:34:53.988313    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:34:54.013387    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:34:54.046063    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:34:54.077041    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:34:54.105745    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:34:54.131011    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:34:54.161212    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:34:54.186054    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:34:54.215522    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:34:54.241991    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:34:54.271902    3816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:34:54.296449    3816 ssh_runner.go:195] Run: openssl version
	I1205 06:34:54.306573    3816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:34:54.311042    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.336884    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:34:54.353148    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.366452    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.412489    3816 command_runner.go:130] > 3ec20f2e
	I1205 06:34:54.416608    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:34:54.434824    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.453553    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:34:54.472739    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481910    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481979    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.485785    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.529492    3816 command_runner.go:130] > b5213941
	I1205 06:34:54.534432    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:34:54.550655    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.568891    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:34:54.588631    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.607947    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.650843    3816 command_runner.go:130] > 51391683
	I1205 06:34:54.656334    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
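The three blocks above all follow OpenSSL's CA directory convention: a certificate is looked up in /etc/ssl/certs through a symlink named <subject-hash>.0 that points at the PEM file. Reconstructing one link by hand (paths taken from the log):

    # Compute the subject hash and (re)create the lookup symlink OpenSSL uses.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"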
	I1205 06:34:54.673967    3816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.682495    3816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.683019    3816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:34:54.683019    3816 command_runner.go:130] > Device: 8,48	Inode: 15231       Links: 1
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: 2025-12-05 06:30:39.655512939 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Modify: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Change: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] >  Birth: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.687561    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:34:54.732319    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.737009    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:34:54.781446    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.785553    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:34:54.831869    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.837267    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:34:54.879433    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.883677    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:34:54.927800    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.932770    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:34:54.976702    3816 command_runner.go:130] > Certificate will not expire
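Each -checkend 86400 probe above asks whether the certificate will still be valid 24 hours (86400 seconds) from now: openssl prints "Certificate will not expire" and exits 0 when it will, otherwise it prints "Certificate will expire" and exits 1, and a non-zero status is the cue to regenerate. Standalone form:

    # The exit status drives the decision; the printed message is informational.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      || echo "certificate expires within 24h; regenerate"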
	I1205 06:34:54.977317    3816 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:54.981646    3816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:34:55.016824    3816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:34:55.029851    3816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:34:55.029954    3816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:34:55.029954    3816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:34:55.034067    3816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:34:55.049954    3816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:34:55.054431    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.105351    3816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-247800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.105351    3816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-247800" cluster setting kubeconfig missing "functional-247800" context setting]
	I1205 06:34:55.106335    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.121466    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.122042    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:34:55.123267    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:34:55.127724    3816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:34:55.143728    3816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 06:34:55.143728    3816 kubeadm.go:602] duration metric: took 113.7728ms to restartPrimaryControlPlane
	I1205 06:34:55.143728    3816 kubeadm.go:403] duration metric: took 166.4081ms to StartCluster
	I1205 06:34:55.143728    3816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.143728    3816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.145169    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.145829    3816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 06:34:55.145829    3816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:34:55.145829    3816 addons.go:70] Setting storage-provisioner=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:70] Setting default-storageclass=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:239] Setting addon storage-provisioner=true in "functional-247800"
	I1205 06:34:55.145829    3816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-247800"
	I1205 06:34:55.145829    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:55.145829    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.153665    3816 out.go:179] * Verifying Kubernetes components...
	I1205 06:34:55.154863    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.158249    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.163403    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:55.210939    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.211668    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.212897    3816 addons.go:239] Setting addon default-storageclass=true in "functional-247800"
	I1205 06:34:55.212990    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.213105    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.217433    3816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:55.222787    3816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.222787    3816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:34:55.224705    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.226041    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.278804    3816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.278804    3816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:34:55.278889    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.282998    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.334515    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
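
The sshutil.go "new ssh client" lines connect to the forwarded SSH port (127.0.0.1:55394) as user docker with the machine's id_rsa key, so the addon YAML can be copied into the node. A sketch of such a connection with golang.org/x/crypto/ssh; this is not minikube's sshutil implementation, and the host key check is relaxed purely for illustration:

```go
package example

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials the forwarded Docker-machine port with a private key,
// mirroring what the sshutil.go lines above imply.
func newSSHClient(keyPath string) (*ssh.Client, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	return ssh.Dial("tcp", "127.0.0.1:55394", cfg)
}
```
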
	I1205 06:34:55.337518    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:55.430551    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.457611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.475848    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.517112    3816 node_ready.go:35] waiting up to 6m0s for node "functional-247800" to be "Ready" ...
	I1205 06:34:55.517112    3816 type.go:168] "Request Body" body=""
	I1205 06:34:55.517112    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:55.519131    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
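
From here node_ready.go waits up to 6m0s for the node's Ready condition, issuing the GET requests traced above and below. A compact sketch of such a poll with client-go (an assumed helper, not the actual node_ready loop):

```go
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node's Ready condition until the timeout,
// mirroring the 6m wait that node_ready.go logs above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(time.Second) // the trace shows roughly 1s pacing between GETs
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}
```
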
	I1205 06:34:55.528125    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.578790    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.578790    3816 retry.go:31] will retry after 337.958227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.602029    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.605442    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.605442    3816 retry.go:31] will retry after 279.867444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
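
Each failed apply is handed to retry.go, which sleeps a growing, jittered delay ("will retry after 337.958227ms", then ~280ms, ~509ms, and so on through the rest of the trace) before rerunning kubectl. A self-contained sketch of that backoff pattern, with illustrative constants rather than minikube's actual tuning:

```go
package example

import (
	"math/rand"
	"time"
)

// retryWithBackoff runs fn and, on failure, sleeps a jittered delay that
// roughly doubles each attempt -- the shape of the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay/2) + 1))
		time.Sleep(delay + jitter) // e.g. "will retry after 337.958227ms"
		delay *= 2                 // delays in the log grow roughly geometrically
	}
	return err
}
```

The jitter matters here: two applies (storageclass and storage-provisioner) fail in lockstep, and randomized delays keep their retries from hammering the apiserver at the same instant.
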
	I1205 06:34:55.890357    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.921657    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.969614    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.974371    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.974371    3816 retry.go:31] will retry after 509.000816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.006071    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.010642    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.010642    3816 retry.go:31] will retry after 471.064759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.487937    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:56.489162    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:56.520264    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:56.520264    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:56.523343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
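
The with_retry.go lines show client-go reacting to the apiserver's Retry-After responses: sleep the advertised delay (here 1s), re-issue the GET, and count attempts. A simplified stdlib sketch of that behavior; the real logic lives in client-go's with_retry.go, this is only the shape:

```go
package example

import (
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter re-issues a GET whenever the response carries a
// Retry-After header, sleeping the advertised number of seconds,
// up to maxAttempts -- the pattern logged by with_retry.go above.
func getWithRetryAfter(c *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := c.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		if secs, err := strconv.Atoi(ra); err == nil {
			time.Sleep(time.Duration(secs) * time.Second) // log shows delay="1s"
		}
	}
}
```
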
	I1205 06:34:56.575976    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 407.043808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 638.604661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.992080    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.065952    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.069179    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.069179    3816 retry.go:31] will retry after 488.646188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.223461    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.294874    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.299418    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.299514    3816 retry.go:31] will retry after 602.819042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.524155    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:57.524155    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:57.527278    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:57.562706    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.639333    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.644388    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.644388    3816 retry.go:31] will retry after 1.399464773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.907870    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.981775    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.984813    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.984921    3816 retry.go:31] will retry after 1.652361939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:58.527501    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:58.527501    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:58.529897    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:34:59.050453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:59.133420    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.139944    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.139944    3816 retry.go:31] will retry after 1.645340531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.530709    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:59.530709    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:59.534391    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:59.642381    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:59.718427    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.721834    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.721834    3816 retry.go:31] will retry after 2.46016532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.534639    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:00.534639    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:00.541150    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:35:00.790675    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:00.867216    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:00.867216    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.867216    3816 retry.go:31] will retry after 3.092416499s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:01.541435    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:01.541435    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:01.544716    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:02.187405    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:02.268020    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:02.273203    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.273203    3816 retry.go:31] will retry after 2.104673669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.544980    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:02.544980    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:02.548584    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:03.548839    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:03.548839    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:03.553516    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:03.966453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:04.049450    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.054065    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.054065    3816 retry.go:31] will retry after 2.461370012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.382944    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:04.458068    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.461488    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.461488    3816 retry.go:31] will retry after 4.66223575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.554680    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:04.555045    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:04.559246    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:05.559799    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:05.560272    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.563266    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:05.563380    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:05.563407    3816 type.go:168] "Request Body" body=""
	I1205 06:35:05.563407    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.565659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:06.521322    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:06.565857    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:06.565857    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:06.569356    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:06.601193    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:06.606428    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:06.606428    3816 retry.go:31] will retry after 3.326595593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:07.570311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:07.570658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:07.572699    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:08.573282    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:08.573282    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:08.576531    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:09.129039    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:09.217404    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:09.217937    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.217937    3816 retry.go:31] will retry after 6.891085945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.577333    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:09.577333    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:09.580146    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:09.938122    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:10.010022    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:10.013513    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.013513    3816 retry.go:31] will retry after 11.942280673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.581103    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:10.581488    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:10.585509    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:11.586198    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:11.586569    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:11.589434    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:12.589851    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:12.589851    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:12.594400    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:13.595039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:13.595039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:13.598596    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:14.599060    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:14.599060    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:14.601840    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:15.602885    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:15.602885    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.605878    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:15.605878    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:15.605878    3816 type.go:168] "Request Body" body=""
	I1205 06:35:15.605878    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.608593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:16.114246    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:16.191406    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:16.193997    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.193997    3816 retry.go:31] will retry after 14.066483079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.609000    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:16.609000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:16.611991    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:17.612458    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:17.612996    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:17.617813    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:18.618806    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:18.618806    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:18.622265    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:19.623287    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:19.623287    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:19.627037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:20.627291    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:20.627658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:20.630318    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:21.630930    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:21.630930    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:21.635020    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:21.963392    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:22.044084    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:22.048902    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.048902    3816 retry.go:31] will retry after 11.169519715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.635453    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:22.635453    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:22.638251    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:23.639335    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:23.639335    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:23.642113    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:24.642790    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:24.642790    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:24.645713    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:25.646115    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:25.646115    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.649594    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:25.649594    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
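That node_ready.go warning is the other half of the picture: in parallel with the addon applies, minikube polls the apiserver for the node object and inspects its Ready condition, treating the EOF as transient and retrying. A sketch of the same check with client-go; the kubeconfig path and node name are copied from the log, the rest is illustrative and not minikube's code:

	// Sketch: fetch a node and report its Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// A failed GET here (e.g. EOF while the apiserver restarts) is what
		// produces the "will retry" warning in the log above.
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-247800", metav1.GetOptions{})
		if err != nil {
			fmt.Println("node not reachable yet:", err)
			return
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", c.Status)
			}
		}
	}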
	I1205 06:35:25.649594    3816 type.go:168] "Request Body" body=""
	I1205 06:35:25.649594    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.652081    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:26.652283    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:26.652283    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:26.656196    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:27.656951    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:27.656951    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:27.660911    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:28.661511    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:28.661511    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:28.665811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:29.666123    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:29.666562    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:29.669285    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:30.265388    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:30.346699    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:30.350211    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.350747    3816 retry.go:31] will retry after 20.097178843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.669645    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:30.669645    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:30.673744    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:31.674027    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:31.674411    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:31.676873    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:32.677707    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:32.677707    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:32.680779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:33.224337    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:33.301595    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:33.304702    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.304702    3816 retry.go:31] will retry after 17.498614608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.681368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:33.681368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:33.685247    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:34.685570    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:34.685570    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:34.689019    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:35.689478    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:35.689478    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.693423    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:35.693478    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:35.693605    3816 type.go:168] "Request Body" body=""
	I1205 06:35:35.693728    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.697203    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:36.697741    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:36.697741    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:36.700841    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:37.701712    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:37.701712    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:37.705613    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:38.706497    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:38.706497    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:38.709240    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:39.710263    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:39.710263    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:39.714262    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:40.714574    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:40.714574    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:40.717659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:41.717815    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:41.717815    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:41.720914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:42.722129    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:42.722129    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:42.725427    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:43.726728    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:43.727083    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:43.729850    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:44.730383    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:44.730383    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:44.733852    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:45.735220    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:45.735642    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.738135    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:45.738135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
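Each ten-step run of "Got a Retry-After response" lines records the HTTP client backing off because the server side answered with a Retry-After hint: sleep the advertised delay (here 1s), re-issue the GET, give up after ten attempts. A generic sketch of honoring that header with net/http; this is the mechanism the log describes, not client-go's actual with_retry.go:

	// Sketch: honor an HTTP Retry-After header (seconds form only; the
	// header may also carry an HTTP date, which this sketch ignores).
	package main

	import (
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
		for attempt := 1; ; attempt++ {
			resp, err := http.Get(url)
			if err != nil {
				return nil, err
			}
			secs, convErr := strconv.Atoi(resp.Header.Get("Retry-After"))
			if convErr != nil || attempt >= maxAttempts {
				return resp, nil // no parseable Retry-After, or attempts spent
			}
			resp.Body.Close()
			fmt.Printf("Got a Retry-After response delay=%ds attempt=%d\n", secs, attempt)
			time.Sleep(time.Duration(secs) * time.Second)
		}
	}

	func main() {
		// Hypothetical local endpoint; the test itself talks to https://127.0.0.1:55398.
		if resp, err := getWithRetryAfter("http://127.0.0.1:8080/healthz", 10); err == nil {
			fmt.Println("status:", resp.Status)
			resp.Body.Close()
		}
	}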
	I1205 06:35:45.738135    3816 type.go:168] "Request Body" body=""
	I1205 06:35:45.738135    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.740498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:46.740699    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:46.740699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:46.744820    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:47.745629    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:47.746108    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:47.748477    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:48.749130    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:48.749130    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:48.752304    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:49.753459    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:49.753860    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:49.756462    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:50.453778    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:50.536078    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.536601    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.536601    3816 retry.go:31] will retry after 10.835620015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.756979    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:50.756979    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:50.760402    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:50.808292    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:50.896096    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.901180    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.901180    3816 retry.go:31] will retry after 25.940426602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:51.761349    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:51.761349    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:51.763343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:35:52.765295    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:52.765295    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:52.768404    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:53.769128    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:53.769490    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:53.773090    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:54.773373    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:54.773373    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:54.776047    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:55.776319    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:55.776319    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.779826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:55.779933    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:55.780038    3816 type.go:168] "Request Body" body=""
	I1205 06:35:55.780038    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.782548    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:56.782984    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:56.782984    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:56.786482    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:57.787420    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:57.787420    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:57.791145    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:58.791893    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:58.792215    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:58.795191    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:59.795792    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:59.795792    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:59.798496    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:00.799902    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:00.800226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:00.803690    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:01.377212    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:01.460054    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:01.465324    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:01.465324    3816 retry.go:31] will retry after 27.628572595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
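Through all of these cycles the underlying condition never changes: no process is accepting connections on the apiserver's local port, so every validation attempt ends in "dial tcp [::1]:8441: connect: connection refused". One way to make that precondition explicit is to wait for the port before applying anything; a sketch under that assumption (the address comes from the error text, the helper name is hypothetical):

	// Sketch: block until a TCP listener appears on the apiserver port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			// "connect: connection refused" lands here until kube-apiserver
			// binds the port again.
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForPort("localhost:8441", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}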
	I1205 06:36:01.803905    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:01.803905    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:01.806773    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:02.807252    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:02.807252    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:02.809866    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:03.810536    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:03.810536    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:03.813578    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:04.814042    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:04.814042    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:04.817276    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:05.818288    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:05.818679    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:05.821810    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:36:05.821891    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:05.821987    3816 type.go:168] "Request Body" body=""
	I1205 06:36:05.821987    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:05.824311    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:06.824568    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:06.824568    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:06.828662    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:07.829627    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:07.829627    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:07.832420    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:08.833221    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:08.833221    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:08.837155    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:09.838074    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:09.838074    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:09.841184    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:10.842375    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:10.842375    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:10.844946    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:11.846051    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:11.846051    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:11.849339    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:12.849998    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:12.850423    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:12.852739    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:13.853070    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:13.853070    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:13.856576    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:14.857697    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:14.857697    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:14.863183    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:36:15.864368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:15.864368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:15.868275    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:36:15.868370    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:15.868414    3816 type.go:168] "Request Body" body=""
	I1205 06:36:15.868524    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:15.870901    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:16.847285    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:16.871649    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:16.871961    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:16.873985    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:16.928128    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:16.933236    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:16.933236    3816 retry.go:31] will retry after 34.477637514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:17.875167    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:17.875167    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:17.879555    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:18.879691    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:18.879691    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:18.882703    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:19.883482    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:19.883482    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:19.886835    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:20.887694    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:20.887694    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:20.890798    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:21.891367    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:21.891367    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:21.894170    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:22.894555    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:22.894555    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:22.898343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:23.898560    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:23.898560    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:23.901633    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:24.902026    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:24.902026    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:24.905116    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:25.905658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:25.905658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:25.908458    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:36:25.908570    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:25.908723    3816 type.go:168] "Request Body" body=""
	I1205 06:36:25.908723    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:25.911359    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:26.911630    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:26.911630    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:26.915364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:27.916524    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:27.916824    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:27.919661    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:28.920716    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:28.920716    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:28.923642    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:29.100195    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:29.179813    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.183920    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.184562    3816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
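	The apply fails at client-side validation: kubectl cannot download the OpenAPI schema because nothing is answering on localhost:8441 (the apiserver is down). The error's own hint is to skip validation; a minimal sketch of that manual retry, reusing the exact command from the log with the suggested flag added (this only bypasses the schema download, it does not address the apiserver outage):

		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
		  --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml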
	[... identical GET https://127.0.0.1:55398/api/v1/nodes/functional-247800 / 1s Retry-After exchanges repeat for attempts 4-10, 06:36:29-06:36:35 ...]
	W1205 06:36:35.951285    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[... a new poll cycle starts at 06:36:35 and the same GET / 1s Retry-After exchange repeats for attempts 1-10, through 06:36:45 ...]
	W1205 06:36:45.994070    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[... the next poll cycle repeats the same exchange for attempts 1-5, 06:36:45-06:36:51 ...]
	I1205 06:36:51.417352    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:51.854034    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:36:51.865604    3816 out.go:179] * Enabled addons: 
	I1205 06:36:51.868880    3816 addons.go:530] duration metric: took 1m56.7213702s for enable addons: enabled=[]
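	For reference, the node poll that minikube keeps retrying above can be reproduced by hand. A minimal sketch, assuming kubectl is available inside the node and reusing the kubeconfig path, node name, and 1s cadence shown in the log (the jsonpath filter is a hypothetical way to read the Ready condition, not what minikube itself runs):

		# poll the Ready condition once per second, up to 10 attempts per cycle
		for i in $(seq 1 10); do
		  kubectl --kubeconfig /var/lib/minikube/kubeconfig get node functional-247800 \
		    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' && break
		  sleep 1
		done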
	[... the poll cycle continues with the same exchange for attempts 6-10, through 06:36:56 ...]
	W1205 06:36:56.040359    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:56.040359    3816 type.go:168] "Request Body" body=""
	I1205 06:36:56.040359    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:56.043162    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:57.043498    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:57.043941    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:57.046650    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:58.047193    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:58.047742    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:58.050545    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:59.051297    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:59.051297    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:59.054095    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:00.054646    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:00.054646    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:00.057943    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:01.058170    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:01.058170    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:01.061024    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:02.061200    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:02.061200    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:02.064035    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:03.065365    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:03.065365    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:03.068662    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:04.069784    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:04.070189    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:04.072456    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:05.073381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:05.073381    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:05.076559    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:06.076793    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:06.076793    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:06.079598    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:06.079598    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:06.079598    3816 type.go:168] "Request Body" body=""
	I1205 06:37:06.079598    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:06.082197    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:07.082493    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:07.082493    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:07.085205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:08.086412    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:08.086412    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:08.089713    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:09.090483    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:09.090483    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:09.093906    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:10.094287    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:10.094287    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:10.097613    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:11.097803    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:11.097803    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:11.101190    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:12.101619    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:12.101619    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:12.104634    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:13.104688    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:13.104688    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:13.108075    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:14.108856    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:14.109198    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:14.113007    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:15.113918    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:15.113918    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:15.116912    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:16.117830    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:16.117830    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:16.121438    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:37:16.121438    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:16.121438    3816 type.go:168] "Request Body" body=""
	I1205 06:37:16.121438    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:16.124099    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:17.124588    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:17.124588    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:17.128092    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:18.128319    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:18.128319    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:18.132513    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:19.132736    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:19.132736    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:19.135560    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:20.136515    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:20.136515    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:20.139792    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:21.140167    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:21.140471    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:21.143328    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:22.144039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:22.144039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:22.146593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:23.147175    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:23.147543    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:23.150087    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:24.150247    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:24.150247    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:24.154118    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:25.154433    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:25.154433    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:25.157386    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:26.157568    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:26.157568    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:26.160472    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:26.160472    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:26.160472    3816 type.go:168] "Request Body" body=""
	I1205 06:37:26.161000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:26.162649    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:27.163417    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:27.163417    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:27.167106    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:28.167812    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:28.167812    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:28.170974    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:29.171418    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:29.171418    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:29.174717    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:30.174973    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:30.174973    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:30.179281    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:31.179472    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:31.179472    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:31.182137    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:32.182463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:32.182463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:32.185914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:33.186359    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:33.186359    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:33.189745    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:34.190102    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:34.190102    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:34.193507    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:35.194094    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:35.194094    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:35.197205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:36.197770    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:36.197770    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.200498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:36.200498    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:36.201020    3816 type.go:168] "Request Body" body=""
	I1205 06:37:36.201099    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.203111    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:37.204025    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:37.204025    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:37.207133    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:38.207447    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:38.207447    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:38.210787    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:39.211776    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:39.211776    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:39.213772    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:40.214710    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:40.214710    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:40.217616    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:41.217767    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:41.217767    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:41.221200    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:42.221683    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:42.222132    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:42.224721    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:43.224982    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:43.224982    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:43.229361    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:44.230310    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:44.230310    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:44.233109    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:45.234073    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:45.234345    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:45.238600    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:46.238845    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:46.238845    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:46.242060    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:37:46.242126    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:46.242126    3816 type.go:168] "Request Body" body=""
	I1205 06:37:46.242126    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:46.244330    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:47.245532    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:47.245532    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:47.248646    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:48.249492    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:48.249786    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:48.252034    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:49.252532    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:49.252532    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:49.255984    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:50.256278    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:50.256278    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:50.260022    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:51.260850    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:51.260850    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:51.262856    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:52.263771    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:52.263771    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:52.266969    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:53.267499    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:53.267499    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:53.270917    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:54.271483    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:54.271483    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:54.273932    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:55.274677    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:55.274677    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:55.277978    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:56.278630    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:56.278630    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:56.281414    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:56.281414    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:56.281414    3816 type.go:168] "Request Body" body=""
	I1205 06:37:56.281414    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:56.283686    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
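
The pattern above is client-go's throttling loop: every GET to /api/v1/nodes/functional-247800 comes back with a Retry-After, with_retry.go sleeps the advertised 1s and re-issues the request, and after ten attempts the caller (node_ready.go) logs an EOF warning and starts over. That ten-second heartbeat repeats unchanged until about 06:39:34 below; only the node_ready warnings are kept. As a minimal, hypothetical sketch of that honor-Retry-After behavior over a plain net/http client (not minikube's actual implementation):

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter re-issues a GET while the server keeps answering with a
// Retry-After header, sleeping the advertised delay between attempts and
// giving up after maxAttempts -- mirroring the 1s/10-attempt cadence in the
// log above. Sketch only; minikube's real loop lives in client-go.
func getWithRetryAfter(client *http.Client, url string) (*http.Response, error) {
	const maxAttempts = 10
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, fmt.Errorf("attempt %d: %w", attempt, err)
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // a real answer; caller closes the body
		}
		resp.Body.Close() // throttled; discard and back off
		delay := time.Second // default when the header is not a plain number
		if secs, err := strconv.Atoi(ra); err == nil {
			delay = time.Duration(secs) * time.Second
		}
		fmt.Printf("Got a Retry-After response delay=%s attempt=%d url=%s\n", delay, attempt, url)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("still throttled after %d attempts: GET %s", maxAttempts, url)
}

func main() {
	// Placeholder endpoint; any server that can return Retry-After will do.
	resp, err := getWithRetryAfter(http.DefaultClient, "http://localhost:8080/healthz")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}

The ten-attempt cap is why every tenth request in the log is immediately followed by a node_ready warning before the cycle restarts.
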
	W1205 06:38:06.319941    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:16.362394    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:26.407565    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:36.448309    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:46.492857    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:56.532932    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:06.573172    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:16.615128    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:26.658286    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
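
Each warning above is the readiness poll giving up on one request and immediately starting the next. For reference, an illustrative client-go poll of the node's Ready condition — a sketch with a placeholder kubeconfig path and the node name from this run, not minikube's node_ready.go itself:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the node name is the one from this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-247800", metav1.GetOptions{})
		if err != nil {
			// Transport failures such as the EOFs above land here; keep polling.
			fmt.Printf("error getting node (will retry): %v\n", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(time.Second)
	}
}
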
	I1205 06:39:34.691521    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:34.691521    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:34.693937    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:35.694845    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:35.694845    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:35.698294    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:36.699532    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:36.699532    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:36.702195    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:39:36.702717    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:39:36.702862    3816 type.go:168] "Request Body" body=""
	I1205 06:39:36.702916    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:36.706473    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:37.707504    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:37.707504    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:37.710813    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:38.710939    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:38.711535    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:38.716232    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:39:39.717207    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:39.717207    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:39.720152    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:40.720331    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:40.720331    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:40.722990    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:41.723691    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:41.723691    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:41.726966    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:42.727268    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:42.727268    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:42.731157    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:43.731449    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:43.731449    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:43.733873    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:44.734365    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:44.734365    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:44.737250    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:45.738219    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:45.738219    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:45.741606    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:46.742116    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:46.742448    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:46.744702    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:39:46.745230    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:39:46.745415    3816 type.go:168] "Request Body" body=""
	I1205 06:39:46.745518    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:46.747577    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:47.748110    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:47.748110    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:47.751287    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:48.751998    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:48.751998    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:48.755225    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:49.756362    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:49.756362    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:49.758876    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:50.759512    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:50.759512    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:50.762228    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:51.762926    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:51.762926    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:51.766327    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:52.766951    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:52.766951    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:52.770535    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:53.771298    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:53.771298    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:53.774215    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:54.774580    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:54.774580    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:54.777547    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:55.778421    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:55.778421    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:55.781650    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:56.782155    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:56.783007    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:56.785844    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:39:56.785844    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:39:56.785844    3816 type.go:168] "Request Body" body=""
	I1205 06:39:56.785844    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:56.788526    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:57.788851    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:57.788851    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:57.791811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:58.792393    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:58.792393    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:58.796105    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:59.796407    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:59.796407    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:59.799250    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:00.799796    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:00.799796    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:00.803018    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:01.803711    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:01.803711    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:01.806363    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:02.806549    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:02.806979    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:02.810046    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:03.810372    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:03.810808    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:03.813835    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:04.814104    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:04.814104    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:04.817217    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:05.817542    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:05.817985    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:05.820814    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:06.821479    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:06.821479    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:06.825616    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1205 06:40:06.825616    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:06.825616    3816 type.go:168] "Request Body" body=""
	I1205 06:40:06.825616    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:06.828168    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:07.828495    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:07.828495    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:07.831826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:08.832009    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:08.832009    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:08.834677    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:09.834944    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:09.834944    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:09.838182    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:10.838841    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:10.838841    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:10.842122    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:11.842336    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:11.842336    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:11.845418    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:12.846381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:12.846722    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:12.849321    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:13.849671    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:13.850100    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:13.852968    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:14.853642    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:14.853642    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:14.856503    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:15.856908    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:15.856908    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:15.861027    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:40:16.862019    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:16.862328    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:16.864135    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1205 06:40:16.864135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:16.864135    3816 type.go:168] "Request Body" body=""
	I1205 06:40:16.864652    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:16.866384    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:40:17.867632    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:17.867632    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:17.870561    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:18.871085    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:18.871085    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:18.874523    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:19.874746    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:19.874746    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:19.877529    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:20.878119    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:20.878119    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:20.881395    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:21.881716    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:21.881716    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:21.884145    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:22.884876    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:22.884876    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:22.887889    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:23.888341    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:23.888494    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:23.891334    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:24.891830    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:24.891830    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:24.895547    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:25.896077    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:25.896077    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:25.898755    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:26.899940    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:26.899940    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:26.903829    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:40:26.903925    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:26.904028    3816 type.go:168] "Request Body" body=""
	I1205 06:40:26.904082    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:26.907442    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:27.907744    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:27.907744    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:27.911092    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:28.911316    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:28.911316    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:28.914347    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:29.914739    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:29.914739    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:29.918366    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:30.918822    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:30.918822    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:30.921456    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:31.922028    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:31.922028    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:31.925069    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:32.925330    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:32.925330    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:32.928779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:33.929376    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:33.929376    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:33.933212    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:34.933571    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:34.933571    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:34.936160    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:35.937442    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:35.937442    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:35.941103    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:36.941232    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:36.941232    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.943558    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:36.943558    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:36.943558    3816 type.go:168] "Request Body" body=""
	I1205 06:40:36.943558    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.946031    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:40:37.946448    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:37.946847    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:37.949586    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:38.949756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:38.950157    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:38.952901    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:39.953375    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:39.953783    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:39.956248    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:40.957703    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:40.957703    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:40.960899    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:41.961836    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:41.961836    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:41.965167    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:42.965316    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:42.965560    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:42.968007    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:43.968734    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:43.968734    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:43.971410    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:44.972311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:44.972311    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:44.975433    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:45.976381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:45.976381    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:45.981080    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:40:46.981463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:46.981463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.986037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1205 06:40:46.986125    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:46.986226    3816 type.go:168] "Request Body" body=""
	I1205 06:40:46.986226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.989122    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:47.989324    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:47.989324    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:47.992720    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:48.992852    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:48.992852    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:48.995205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:49.995580    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:49.995580    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:49.998526    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:50.998794    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:50.998794    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:51.001637    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:52.002658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:52.002658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:52.004968    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:53.005044    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:53.005445    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:53.008445    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:54.009089    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:54.009089    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:54.012447    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:55.012756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:55.012756    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:55.015364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:55.523386    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 06:40:55.523386    3816 node_ready.go:38] duration metric: took 6m0.0010607s for node "functional-247800" to be "Ready" ...
	I1205 06:40:55.527309    3816 out.go:203] 
	W1205 06:40:55.529851    3816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:40:55.529851    3816 out.go:285] * 
	W1205 06:40:55.531579    3816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:40:55.533404    3816 out.go:203] 

                                                
                                                
** /stderr **
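The stderr log above shows the client-side shape of the hang: every GET to the apiserver draws a Retry-After response, with_retry.go sleeps 1s and retries up to 10 attempts, then node_ready.go logs the EOF and the cycle starts over. A minimal, self-contained Go sketch of that retry pattern follows; it is a hypothetical stand-in built on plain net/http, not client-go's actual with_retry.go, and the URL is copied from the log purely for illustration.

	package main

	import (
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	// fetchWithRetryAfter is a hypothetical stand-in for the loop visible in the
	// log: retry a GET up to maxAttempts times, sleeping for the interval the
	// server advertises in Retry-After (integer seconds), defaulting to 1s.
	func fetchWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
		var lastErr error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			resp, err := http.Get(url)
			if err != nil {
				lastErr = err // e.g. EOF while the apiserver is unreachable
				time.Sleep(time.Second)
				continue
			}
			if resp.StatusCode != http.StatusTooManyRequests &&
				resp.StatusCode != http.StatusServiceUnavailable {
				return resp, nil // success or a non-retryable status
			}
			delay := time.Second
			if secs, perr := strconv.Atoi(resp.Header.Get("Retry-After")); perr == nil {
				delay = time.Duration(secs) * time.Second
			}
			resp.Body.Close()
			lastErr = fmt.Errorf("retryable status %d", resp.StatusCode)
			fmt.Printf("attempt %d: got Retry-After, sleeping %s\n", attempt, delay)
			time.Sleep(delay)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
	}

	func main() {
		// Endpoint taken from the log for illustration; any URL works here.
		resp, err := fetchWithRetryAfter("https://127.0.0.1:55398/api/v1/nodes/functional-247800", 10)
		if err != nil {
			fmt.Println("failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}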
functional_test.go:676: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-247800 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m12.4234141s for "functional-247800" cluster.
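The six-minute figure matches minikube's node-readiness wait: node_ready.go polls GET /api/v1/nodes/functional-247800 once per second until the node's Ready condition turns True or the 6m0s deadline expires, and here every poll died with EOF. A client-go sketch of an equivalent wait is below; it assumes a kubeconfig at the default path, the node name is taken from the failing test, and the wait helper is illustrative rather than minikube's implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes ~/.kube/config points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodeName := "functional-247800" // node name from the failing test
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
				if err != nil {
					// Transient errors (EOF, connection refused) are swallowed so
					// the poll keeps retrying until the deadline, as in the log.
					return false, nil
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Printf("node %q never became Ready: %v\n", nodeName, err)
		}
	}

With the apiserver returning EOF for the whole window, the condition never fires and the wait surfaces "WaitNodeCondition: context deadline exceeded", which is exactly the GUEST_START error in the stderr above.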
I1205 06:40:56.344802    8036 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
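In the inspect output above, container port 8441/tcp (this profile's --apiserver-port) is published on 127.0.0.1:55398, which is exactly the apiserver URL the round-trippers were polling earlier in the log. A minimal sketch, assuming only that the docker CLI is on PATH, of reading that mapping with the same Go-template style the cli_runner lines in this log use for 22/tcp:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template shape as the cli_runner invocations in this log,
	// pointed at the apiserver port (8441/tcp) instead of SSH (22/tcp).
	tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-247800").Output()
	if err != nil {
		panic(err)
	}
	// For the container inspected above this prints 55398.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}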
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (671.483ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.568227s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-088800 image save kicbase/echo-server:functional-088800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image rm kicbase/echo-server:functional-088800 --alsologtostderr                                                                        │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ ssh            │ functional-088800 ssh sudo cat /etc/test/nested/copy/8036/hosts                                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image save --daemon kicbase/echo-server:functional-088800 --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ start          │ -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ start          │ -p functional-088800 --dry-run --alsologtostderr -v=1 --driver=docker                                                                                     │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ start          │ -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-088800 --alsologtostderr -v=1                                                                                            │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ service        │ functional-088800 service hello-node --url                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format short --alsologtostderr                                                                                               │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format yaml --alsologtostderr                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ ssh            │ functional-088800 ssh pgrep buildkitd                                                                                                                     │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ image          │ functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr                                                    │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format json --alsologtostderr                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format table --alsologtostderr                                                                                               │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ delete         │ -p functional-088800                                                                                                                                      │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:26 UTC │
	│ start          │ -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:26 UTC │                     │
	│ start          │ -p functional-247800 --alsologtostderr -v=8                                                                                                               │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:34 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:34:44
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:34:43.990318    3816 out.go:360] Setting OutFile to fd 932 ...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.034404    3816 out.go:374] Setting ErrFile to fd 1564...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.048005    3816 out.go:368] Setting JSON to false
	I1205 06:34:44.051134    3816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7741,"bootTime":1764908742,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:34:44.051134    3816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:34:44.054997    3816 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:34:44.057041    3816 notify.go:221] Checking for updates...
	I1205 06:34:44.057041    3816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:44.060615    3816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:34:44.063386    3816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:34:44.065338    3816 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:34:44.068100    3816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:34:44.070765    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:44.071546    3816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:34:44.185014    3816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:34:44.190117    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.434951    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.415349563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.438948    3816 out.go:179] * Using the docker driver based on existing profile
	I1205 06:34:44.442716    3816 start.go:309] selected driver: docker
	I1205 06:34:44.442716    3816 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.442716    3816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:34:44.449451    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.693650    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.673163701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.776708    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:44.776708    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:44.776708    3816 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.779353    3816 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:34:44.789396    3816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:34:44.793121    3816 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:34:44.794774    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:44.794774    3816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:34:44.844630    3816 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:44.871213    3816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:34:44.871213    3816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:34:45.153466    3816 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:45.154472    3816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:34:45.156762    3816 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:34:45.156819    3816 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:45.157157    3816 start.go:364] duration metric: took 122.3µs to acquireMachinesLock for "functional-247800"
	I1205 06:34:45.157157    3816 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:34:45.157157    3816 fix.go:54] fixHost starting: 
	I1205 06:34:45.165313    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:45.243648    3816 fix.go:112] recreateIfNeeded on functional-247800: state=Running err=<nil>
	W1205 06:34:45.243648    3816 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:34:45.267762    3816 out.go:252] * Updating the running docker "functional-247800" container ...
	I1205 06:34:45.269766    3816 machine.go:94] provisionDockerMachine start ...
	I1205 06:34:45.274766    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:45.449049    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:45.449049    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:45.449049    3816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:34:45.686505    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:45.686505    3816 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:34:45.691507    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:46.703091    3816 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800: (1.0115691s)
	I1205 06:34:46.706016    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:46.706016    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:46.706016    3816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:34:47.035712    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:47.042684    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.107199    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:47.107199    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:47.107199    3816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:34:47.308149    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:34:47.308197    3816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:34:47.308318    3816 ubuntu.go:190] setting up certificates
	I1205 06:34:47.308318    3816 provision.go:84] configureAuth start
	I1205 06:34:47.315253    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:47.380504    3816 provision.go:143] copyHostCerts
	I1205 06:34:47.381517    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:34:47.381517    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:34:47.382508    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:34:47.382508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:34:47.383507    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:34:47.384508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:34:47.385507    3816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
	I1205 06:34:47.573727    3816 provision.go:177] copyRemoteCerts
	I1205 06:34:47.580429    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:34:47.585428    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.664000    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:47.815162    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1205 06:34:47.815801    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:34:47.849954    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1205 06:34:47.850956    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:34:47.876175    3816 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.876248    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:34:47.876248    3816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.7217371s
	I1205 06:34:47.876248    3816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:34:47.883801    3816 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.883881    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:34:47.883881    3816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.72937s
	I1205 06:34:47.883881    3816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:34:47.908586    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1205 06:34:47.909421    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:34:47.925048    3816 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.925345    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:34:47.925345    3816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.7708333s
	I1205 06:34:47.925345    3816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:34:47.926059    3816 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.926059    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:34:47.926059    3816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.7715471s
	I1205 06:34:47.926059    3816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:34:47.936781    3816 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.937442    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:34:47.937555    3816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.7830428s
	I1205 06:34:47.937609    3816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:34:47.946154    3816 provision.go:87] duration metric: took 637.8269ms to configureAuth
	I1205 06:34:47.946231    3816 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:34:47.946358    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:47.951931    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.990646    3816 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.990646    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:34:47.991641    3816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8371282s
	I1205 06:34:47.991641    3816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:34:48.007838    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.008431    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.008476    3816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:34:48.018898    3816 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.018898    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:34:48.018898    3816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.8643851s
	I1205 06:34:48.018898    3816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:34:48.061664    3816 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.062004    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:34:48.062141    3816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9076274s
	I1205 06:34:48.062141    3816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:34:48.062198    3816 cache.go:87] Successfully saved all images to host disk.
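	[editor's note] The cache lines above map each image reference to an on-disk tar path by replacing the tag separator, e.g. "registry.k8s.io/pause:3.10.1" -> ...\cache\images\amd64\registry.k8s.io\pause_3.10.1. A minimal sketch of that layout, with the root and image names taken from the log; the helper itself is hypothetical, and the real code additionally takes a per-image lock (cache.go:107, 500ms retry delay, 10m timeout) before writing:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath mirrors the layout in the log: the ":" before the tag becomes
// "_", and the registry/repository path becomes directories under
// <root>\cache\images\<arch>. Hypothetical helper, not minikube's own.
func cachePath(root, arch, image string) string {
	return filepath.Join(root, "cache", "images", arch,
		filepath.FromSlash(strings.ReplaceAll(image, ":", "_")))
}

func main() {
	root := `C:\Users\jenkins.minikube4\minikube-integration\.minikube`
	p := cachePath(root, "amd64", "registry.k8s.io/pause:3.10.1")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("cache hit:", p) // the "exists" case logged above
	} else {
		fmt.Println("cache miss, would save tar to:", p)
	}
}
```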
	I1205 06:34:48.196159    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:34:48.196159    3816 ubuntu.go:71] root file system type: overlay
	I1205 06:34:48.196159    3816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:34:48.200167    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.256431    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.257239    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.257347    3816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:34:48.462598    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:34:48.466014    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.522845    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.523383    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.523415    3816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:34:48.714113    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:34:48.714641    3816 machine.go:97] duration metric: took 3.444826s to provisionDockerMachine
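	[editor's note] The three SSH commands above form an idempotent unit update: render the unit into docker.service.new, then `diff -u` it against the live file and only on a difference move it into place, daemon-reload, enable, and restart docker. A local Go sketch of the same compare-then-swap, under the assumption that it runs on the node itself rather than over SSH (paths and systemctl steps taken from the log):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit swaps in the rendered unit only when it differs from the
// current one, mirroring `diff -u ... || { mv ...; daemon-reload; restart; }`.
func updateUnit(current, candidate string, rendered []byte) error {
	if err := os.WriteFile(candidate, rendered, 0o644); err != nil {
		return err
	}
	old, _ := os.ReadFile(current) // a missing unit reads as empty and so differs
	if bytes.Equal(old, rendered) {
		return os.Remove(candidate) // unchanged: leave the running service alone
	}
	if err := os.Rename(candidate, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", append([]string{"-f"}, args...)...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", []byte("[Unit]\n"))
	fmt.Println("update:", err)
}
```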
	I1205 06:34:48.714700    3816 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:34:48.714747    3816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:34:48.721762    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:34:48.726053    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.800573    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:48.947188    3816 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:34:48.954494    3816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_ID="12"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:34:48.954494    3816 command_runner.go:130] > ID=debian
	I1205 06:34:48.954494    3816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:34:48.954494    3816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:34:48.955010    3816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:34:48.955099    3816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:34:48.955099    3816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:34:48.955806    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:34:48.955806    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /etc/ssl/certs/80362.pem
	I1205 06:34:48.956436    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:34:48.956436    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> /etc/test/nested/copy/8036/hosts
	I1205 06:34:48.960827    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:34:48.973199    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:34:49.002014    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:34:49.027943    3816 start.go:296] duration metric: took 313.2383ms for postStartSetup
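	[editor's note] The filesync step above mirrors everything under .minikube\files into the node's root filesystem, which is why files\etc\ssl\certs\80362.pem lands at /etc/ssl/certs/80362.pem. A sketch of that host-to-node path mapping; the helper name is mine, and it assumes Windows path semantics (as in the log), since filepath.Rel only treats backslashes as separators on Windows:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// nodePath maps an asset under the local files root to its absolute
// destination on the node, as the filesync/vm_assets lines above show.
// Run on Windows so that `\` is the path separator.
func nodePath(filesRoot, asset string) (string, error) {
	rel, err := filepath.Rel(filesRoot, asset)
	if err != nil {
		return "", err
	}
	return "/" + filepath.ToSlash(rel), nil
}

func main() {
	root := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\files`
	p, _ := nodePath(root, root+`\etc\ssl\certs\80362.pem`)
	fmt.Println(p) // /etc/ssl/certs/80362.pem
}
```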
	I1205 06:34:49.031806    3816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:34:49.035611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.090476    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.213008    3816 command_runner.go:130] > 1%
	I1205 06:34:49.217907    3816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:34:49.227048    3816 command_runner.go:130] > 950G
	I1205 06:34:49.227093    3816 fix.go:56] duration metric: took 4.0698775s for fixHost
	I1205 06:34:49.227184    3816 start.go:83] releasing machines lock for "functional-247800", held for 4.069942s
	I1205 06:34:49.230591    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:49.286648    3816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:34:49.290773    3816 ssh_runner.go:195] Run: cat /version.json
	I1205 06:34:49.290773    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.294768    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.346982    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.347419    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.463868    3816 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:34:49.468593    3816 ssh_runner.go:195] Run: systemctl --version
	I1205 06:34:49.473361    3816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1205 06:34:49.473361    3816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 06:34:49.482411    3816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:34:49.482411    3816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:34:49.486655    3816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:34:49.495075    3816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:34:49.495101    3816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:34:49.499557    3816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:34:49.512091    3816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:34:49.512091    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:49.512091    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:49.512091    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:49.534248    3816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:34:49.538479    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:34:49.557417    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:34:49.572725    3816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:34:49.577000    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1205 06:34:49.583562    3816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:34:49.583562    3816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
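	[editor's note] The warning above traces back to the probe at 06:34:49.286: the runner invoked `curl.exe` inside the Debian node, which has no such binary, so the command exited 127 ("command not found") and the registry was reported unreachable. A minimal sketch of the same reachability probe using the Linux binary name (flags and URL from the log; running it locally rather than over SSH, and assuming curl is present, are my simplifications):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the log, but invoking "curl" rather than "curl.exe";
	// -m 2 bounds the whole transfer at two seconds.
	out, err := exec.Command("curl", "-sS", "-m", "2", "https://registry.k8s.io/").CombinedOutput()
	if err != nil {
		fmt.Printf("registry probe failed: %v: %s\n", err, out)
		return
	}
	fmt.Println("registry reachable")
}
```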
	I1205 06:34:49.600012    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.618632    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:34:49.636357    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.654641    3816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:34:49.675114    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:34:49.696597    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:34:49.715167    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:34:49.738213    3816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:34:49.750303    3816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:34:49.754900    3816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:34:49.771255    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:49.909849    3816 ssh_runner.go:195] Run: sudo systemctl restart containerd
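	[editor's note] The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: pinning sandbox_image, forcing SystemdCgroup = false to match the detected cgroupfs driver, migrating runtime names to io.containerd.runc.v2, and so on. As an illustration, the SystemdCgroup substitution expressed as a Go regexp, equivalent to the sed invocation at 06:34:49.577 (file path from the log):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	in, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// (?m) makes ^ and $ match per line; ${1} preserves the indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(in, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}
```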
	I1205 06:34:50.068262    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:50.068262    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:50.073308    3816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:34:50.092739    3816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1205 06:34:50.092785    3816 command_runner.go:130] > [Unit]
	I1205 06:34:50.092785    3816 command_runner.go:130] > Description=Docker Application Container Engine
	I1205 06:34:50.092785    3816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1205 06:34:50.092828    3816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1205 06:34:50.092828    3816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1205 06:34:50.092828    3816 command_runner.go:130] > Requires=docker.socket
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitBurst=3
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitIntervalSec=60
	I1205 06:34:50.092884    3816 command_runner.go:130] > [Service]
	I1205 06:34:50.092884    3816 command_runner.go:130] > Type=notify
	I1205 06:34:50.092884    3816 command_runner.go:130] > Restart=always
	I1205 06:34:50.092919    3816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1205 06:34:50.092943    3816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1205 06:34:50.092943    3816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1205 06:34:50.092943    3816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1205 06:34:50.092943    3816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1205 06:34:50.092943    3816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1205 06:34:50.092943    3816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1205 06:34:50.092943    3816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNOFILE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNPROC=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitCORE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1205 06:34:50.092943    3816 command_runner.go:130] > TasksMax=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > TimeoutStartSec=0
	I1205 06:34:50.092943    3816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1205 06:34:50.092943    3816 command_runner.go:130] > Delegate=yes
	I1205 06:34:50.092943    3816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1205 06:34:50.092943    3816 command_runner.go:130] > KillMode=process
	I1205 06:34:50.092943    3816 command_runner.go:130] > OOMScoreAdjust=-500
	I1205 06:34:50.092943    3816 command_runner.go:130] > [Install]
	I1205 06:34:50.092943    3816 command_runner.go:130] > WantedBy=multi-user.target
	I1205 06:34:50.097721    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.125496    3816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:34:50.186929    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.209805    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:34:50.227504    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:50.252330    3816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1205 06:34:50.256641    3816 ssh_runner.go:195] Run: which cri-dockerd
	I1205 06:34:50.264328    3816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1205 06:34:50.269234    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:34:50.282005    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:34:50.306573    3816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:34:50.447619    3816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:34:50.580607    3816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:34:50.581126    3816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 06:34:50.605071    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:34:50.630349    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:50.782135    3816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:34:51.643866    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:34:51.667031    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:34:51.689935    3816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 06:34:51.715903    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:51.740104    3816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:34:51.897148    3816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:34:52.038509    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.188129    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:34:52.216759    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:34:52.241711    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.388958    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:34:52.491038    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:52.508998    3816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:34:52.514460    3816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:34:52.523944    3816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1205 06:34:52.524474    3816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:34:52.524548    3816 command_runner.go:130] > Device: 0,112	Inode: 1756        Links: 1
	I1205 06:34:52.524589    3816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1205 06:34:52.524606    3816 command_runner.go:130] > Access: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524642    3816 command_runner.go:130] > Modify: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] > Change: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] >  Birth: -
	I1205 06:34:52.524737    3816 start.go:564] Will wait 60s for crictl version
	I1205 06:34:52.529361    3816 ssh_runner.go:195] Run: which crictl
	I1205 06:34:52.536028    3816 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:34:52.539850    3816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:34:52.581379    3816 command_runner.go:130] > Version:  0.1.0
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeName:  docker
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeVersion:  29.0.4
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:34:52.581379    3816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 06:34:52.585592    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.624737    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.628712    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.665154    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.668797    3816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:34:52.672375    3816 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:34:52.798876    3816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:34:52.801876    3816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:34:52.809731    3816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
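	[editor's note] The host-visible IP for host.minikube.internal is discovered by digging host.docker.internal from inside the node container, then /etc/hosts is grepped for an existing entry. A sketch of the discovery step (container name and commands from the log; run wherever the docker CLI can reach the node):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: docker exec -t functional-247800 dig +short host.docker.internal
	out, err := exec.Command("docker", "exec", "-t", "functional-247800",
		"dig", "+short", "host.docker.internal").Output()
	if err != nil {
		fmt.Println("dig failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("host ip for host.minikube.internal:", ip) // 192.168.65.254 above
}
```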
	I1205 06:34:52.813378    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:52.870537    3816 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:34:52.870721    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:52.873969    3816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:52.909019    3816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 06:34:52.909019    3816 cache_images.go:86] Images are preloaded, skipping loading
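	[editor's note] cache_images.go:86 skips loading because every required image already appears in `docker images`. A sketch of that decision, checking the CLI output against an expected list (a subset of the stdout above; the exact required set is minikube's, not mine):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: docker images --format {{.Repository}}:{{.Tag}}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images:", err)
		return
	}
	have := map[string]bool{}
	for _, tag := range strings.Fields(string(out)) {
		have[tag] = true
	}
	expected := []string{ // subset of the preloaded images listed above
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/pause:3.10.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, must load:", img)
			return
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}
```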
	I1205 06:34:52.909019    3816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:34:52.909019    3816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:34:52.913141    3816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:34:52.986014    3816 command_runner.go:130] > cgroupfs
	I1205 06:34:52.986014    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:52.986014    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:52.986014    3816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:34:52.986014    3816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:34:52.986014    3816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:34:52.990595    3816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubeadm
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubectl
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubelet
	I1205 06:34:53.003509    3816 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:34:53.008042    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:34:53.020762    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:34:53.041328    3816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:34:53.061676    3816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1205 06:34:53.085180    3816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:34:53.093591    3816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:34:53.098459    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:53.247095    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:53.952452    3816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:34:53.952558    3816 certs.go:195] generating shared ca certs ...
	I1205 06:34:53.952558    3816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:53.953085    3816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:34:53.953228    3816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:34:53.953228    3816 certs.go:257] generating profile certs ...
	I1205 06:34:53.954037    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:34:53.954334    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:34:53.954527    3816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:34:53.954527    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:34:53.954631    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:34:53.954814    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:34:53.954910    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:34:53.954973    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:34:53.955045    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:34:53.955116    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:34:53.955223    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:34:53.955290    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:34:53.955826    3816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:34:53.955954    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:34:53.956129    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:34:53.956912    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:34:53.957083    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem -> /usr/share/ca-certificates/8036.pem
	I1205 06:34:53.957119    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /usr/share/ca-certificates/80362.pem
	I1205 06:34:53.957269    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:53.958214    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:34:53.988313    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:34:54.013387    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:34:54.046063    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:34:54.077041    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:34:54.105745    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:34:54.131011    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:34:54.161212    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:34:54.186054    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:34:54.215522    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:34:54.241991    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:34:54.271902    3816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:34:54.296449    3816 ssh_runner.go:195] Run: openssl version
	I1205 06:34:54.306573    3816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:34:54.311042    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.336884    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:34:54.353148    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.366452    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.412489    3816 command_runner.go:130] > 3ec20f2e
	I1205 06:34:54.416608    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:34:54.434824    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.453553    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:34:54.472739    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481910    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481979    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.485785    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.529492    3816 command_runner.go:130] > b5213941
	I1205 06:34:54.534432    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:34:54.550655    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.568891    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:34:54.588631    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.607947    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.650843    3816 command_runner.go:130] > 51391683
	I1205 06:34:54.656334    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
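	[editor's note] Each CA above is made discoverable to OpenSSL by linking it into /etc/ssl/certs under its subject hash, which is why the `openssl x509 -hash -noout` output (e.g. 3ec20f2e) reappears as the symlink /etc/ssl/certs/3ec20f2e.0 that the `test -L` checks for. The log only verifies the links; creating one as below is my illustration of the convention (paths from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/80362.pem"
	// Mirrors: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("hash:", err)
		return
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // force, as ln -fs would
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println("installed", link) // e.g. /etc/ssl/certs/3ec20f2e.0
}
```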
	I1205 06:34:54.673967    3816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.682495    3816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.683019    3816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:34:54.683019    3816 command_runner.go:130] > Device: 8,48	Inode: 15231       Links: 1
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: 2025-12-05 06:30:39.655512939 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Modify: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Change: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] >  Birth: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.687561    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:34:54.732319    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.737009    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:34:54.781446    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.785553    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:34:54.831869    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.837267    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:34:54.879433    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.883677    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:34:54.927800    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.932770    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:34:54.976702    3816 command_runner.go:130] > Certificate will not expire
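	[editor's note] `openssl x509 -noout -checkend 86400` succeeds only if the certificate will still be valid 86400 seconds (24 hours) from now, which is what each "Certificate will not expire" line above reports. The same check expressed with Go's crypto/x509 (cert path from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// -checkend 86400: is the cert still valid 24h from now?
	if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
	}
}
```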
	I1205 06:34:54.977317    3816 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:54.981646    3816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:34:55.016824    3816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:34:55.029851    3816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:34:55.029954    3816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:34:55.029954    3816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:34:55.034067    3816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:34:55.049954    3816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:34:55.054431    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.105351    3816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-247800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.105351    3816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-247800" cluster setting kubeconfig missing "functional-247800" context setting]
	I1205 06:34:55.106335    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.121466    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.122042    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:34:55.123267    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:34:55.127724    3816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:34:55.143728    3816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 06:34:55.143728    3816 kubeadm.go:602] duration metric: took 113.7728ms to restartPrimaryControlPlane
	I1205 06:34:55.143728    3816 kubeadm.go:403] duration metric: took 166.4081ms to StartCluster
	I1205 06:34:55.143728    3816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.143728    3816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.145169    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.145829    3816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 06:34:55.145829    3816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:34:55.145829    3816 addons.go:70] Setting storage-provisioner=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:70] Setting default-storageclass=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:239] Setting addon storage-provisioner=true in "functional-247800"
	I1205 06:34:55.145829    3816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-247800"
	I1205 06:34:55.145829    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:55.145829    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.153665    3816 out.go:179] * Verifying Kubernetes components...
	I1205 06:34:55.154863    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.158249    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.163403    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:55.210939    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.211668    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.212897    3816 addons.go:239] Setting addon default-storageclass=true in "functional-247800"
	I1205 06:34:55.212990    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.213105    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.217433    3816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:55.222787    3816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.222787    3816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:34:55.224705    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.226041    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.278804    3816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.278804    3816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:34:55.278889    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.282998    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.334515    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
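
[editor's note] The sshutil and "scp memory -->" lines stream in-memory addon manifests into the container over the forwarded SSH port (127.0.0.1:55394, user docker, the profile's id_rsa). A minimal sketch of such a transfer with golang.org/x/crypto/ssh follows; scpMemory and the sudo-tee command are assumptions for illustration, not minikube's actual implementation:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// scpMemory writes an in-memory byte slice to remotePath over an existing
// SSH connection by piping it into sudo tee on the far side.
func scpMemory(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:55394", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	_ = scpMemory(client, []byte("kind: StorageClass\n"), "/tmp/demo.yaml")
}
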
	I1205 06:34:55.337518    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:55.430551    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.457611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.475848    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.517112    3816 node_ready.go:35] waiting up to 6m0s for node "functional-247800" to be "Ready" ...
	I1205 06:34:55.517112    3816 type.go:168] "Request Body" body=""
	I1205 06:34:55.517112    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:55.519131    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
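
[editor's note] The round_trippers entries above come from a transport wrapper that logs each API request's verb, URL, and latency. A bare-bones version of that pattern in Go (loggingRT is an illustrative name, not client-go's type):

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingRT wraps another RoundTripper and logs each round trip's duration.
type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	log.Printf("%q %s took %dms", req.Method, req.URL, time.Since(start).Milliseconds())
	return resp, err
}

func main() {
	client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
	// Mirrors the GET in the log; errors are expected outside the test rig.
	_, _ = client.Get("https://127.0.0.1:55398/api/v1/nodes/functional-247800")
}
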
	I1205 06:34:55.528125    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.578790    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.578790    3816 retry.go:31] will retry after 337.958227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
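
[editor's note] The retry.go entries above and below all follow one pattern: re-run the failing kubectl apply after a growing, jittered delay until it succeeds or an overall deadline passes, which is why the logged intervals (337ms, 509ms, ... several seconds) drift upward irregularly. A compact sketch of that loop (applyWithRetry is an invented stand-in for minikube's retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func applyWithRetry(runApply func() error, deadline time.Duration) error {
	delay := 300 * time.Millisecond
	end := time.Now().Add(deadline)
	for {
		err := runApply()
		if err == nil {
			return nil
		}
		if time.Now().After(end) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Grow the wait and add jitter, mirroring the irregular
		// "will retry after ..." intervals in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = applyWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection refused")
		}
		return nil
	}, 30*time.Second)
}
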
	I1205 06:34:55.602029    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.605442    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.605442    3816 retry.go:31] will retry after 279.867444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.890357    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.921657    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.969614    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.974371    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.974371    3816 retry.go:31] will retry after 509.000816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.006071    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.010642    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.010642    3816 retry.go:31] will retry after 471.064759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.487937    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:56.489162    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:56.520264    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:56.520264    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:56.523343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
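
[editor's note] The with_retry lines show the client honoring a server-supplied Retry-After: wait the advertised delay (here 1s), re-issue the GET, and count attempts up to a cap of ten before surfacing the error. An illustrative client-side equivalent (getWithRetryAfter is not client-go's API):

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		secs, convErr := strconv.Atoi(ra)
		if convErr != nil {
			return resp, nil // unparseable header: give the caller the response
		}
		resp.Body.Close()
		fmt.Printf("got a Retry-After response, delay=%ds attempt=%d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	_, _ = getWithRetryAfter(http.DefaultClient, "https://127.0.0.1:55398/api/v1/nodes/functional-247800", 10)
}
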
	I1205 06:34:56.575976    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 407.043808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 638.604661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.992080    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.065952    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.069179    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.069179    3816 retry.go:31] will retry after 488.646188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.223461    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.294874    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.299418    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.299514    3816 retry.go:31] will retry after 602.819042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.524155    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:57.524155    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:57.527278    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:57.562706    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.639333    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.644388    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.644388    3816 retry.go:31] will retry after 1.399464773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.907870    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.981775    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.984813    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.984921    3816 retry.go:31] will retry after 1.652361939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:58.527501    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:58.527501    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:58.529897    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:34:59.050453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:59.133420    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.139944    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.139944    3816 retry.go:31] will retry after 1.645340531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.530709    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:59.530709    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:59.534391    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:59.642381    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:59.718427    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.721834    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.721834    3816 retry.go:31] will retry after 2.46016532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.534639    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:00.534639    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:00.541150    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:35:00.790675    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:00.867216    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:00.867216    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.867216    3816 retry.go:31] will retry after 3.092416499s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:01.541435    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:01.541435    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:01.544716    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:02.187405    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:02.268020    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:02.273203    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.273203    3816 retry.go:31] will retry after 2.104673669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.544980    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:02.544980    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:02.548584    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:03.548839    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:03.548839    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:03.553516    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:03.966453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:04.049450    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.054065    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.054065    3816 retry.go:31] will retry after 2.461370012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.382944    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:04.458068    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.461488    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.461488    3816 retry.go:31] will retry after 4.66223575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.554680    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:04.555045    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:04.559246    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:05.559799    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:05.560272    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.563266    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:05.563380    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:05.563407    3816 type.go:168] "Request Body" body=""
	I1205 06:35:05.563407    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.565659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
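
[editor's note] node_ready.go is polling for the node's Ready condition and tolerating the EOF errors logged above until the 6m0s budget runs out. A condensed sketch of that wait with client-go (waitNodeReady is an invented name; wiring up the *rest.Config is shown in the earlier clientcmd sketch):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitNodeReady polls the named node once per second until its Ready
// condition is True or the timeout elapses; transient API errors are
// logged and retried, matching the "(will retry)" warnings in the log.
func waitNodeReady(cfg *rest.Config, name string, timeout time.Duration) error {
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}

func main() {
	// See the clientcmd sketch earlier in this section for building cfg.
}
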
	I1205 06:35:06.521322    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:06.565857    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:06.565857    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:06.569356    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:06.601193    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:06.606428    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:06.606428    3816 retry.go:31] will retry after 3.326595593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:07.570311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:07.570658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:07.572699    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:08.573282    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:08.573282    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:08.576531    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:09.129039    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:09.217404    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:09.217937    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.217937    3816 retry.go:31] will retry after 6.891085945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.577333    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:09.577333    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:09.580146    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:09.938122    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:10.010022    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:10.013513    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.013513    3816 retry.go:31] will retry after 11.942280673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.581103    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:10.581488    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:10.585509    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:11.586198    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:11.586569    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:11.589434    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:12.589851    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:12.589851    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:12.594400    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:13.595039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:13.595039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:13.598596    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:14.599060    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:14.599060    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:14.601840    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:15.602885    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:15.602885    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.605878    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:15.605878    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:15.605878    3816 type.go:168] "Request Body" body=""
	I1205 06:35:15.605878    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.608593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:16.114246    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:16.191406    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:16.193997    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.193997    3816 retry.go:31] will retry after 14.066483079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.609000    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:16.609000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:16.611991    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:17.612458    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:17.612996    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:17.617813    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:18.618806    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:18.618806    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:18.622265    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:19.623287    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:19.623287    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:19.627037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:20.627291    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:20.627658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:20.630318    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:21.630930    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:21.630930    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:21.635020    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:21.963392    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:22.044084    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:22.048902    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.048902    3816 retry.go:31] will retry after 11.169519715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
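The "will retry after 11.169519715s" lines above are minikube's addon applier backing off between kubectl invocations, with randomized, roughly growing delays (11s, 20s, 17s, 26s further down). As an illustrative sketch only, not minikube's actual retry.go and with hypothetical names, the pattern reduces to a loop that sleeps a jittered, doubling delay after each failure:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // applyWithRetry is a hypothetical sketch of the "will retry after Xs"
    // pattern in the log: run a command, and on failure sleep for a
    // randomized, growing delay before trying again. It is NOT minikube's
    // actual retry implementation.
    func applyWithRetry(attempts int, run func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = run(); err == nil {
                return nil
            }
            // Jittered backoff roughly doubling per attempt, similar in
            // spirit to the delays logged above.
            delay := time.Duration(1<<uint(i))*time.Second +
                time.Duration(rand.Int63n(int64(time.Second)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        err := applyWithRetry(3, func() error {
            // Stand-in for: kubectl apply --force -f .../storageclass.yaml
            return errors.New("connect: connection refused")
        })
        fmt.Println(err)
    }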
	I1205 06:35:22.635453    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:22.635453    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:22.638251    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:23.639335    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:23.639335    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:23.642113    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:24.642790    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:24.642790    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:24.645713    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:25.646115    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:25.646115    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.649594    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:25.649594    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
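Each "Got a Retry-After response" / "Request" / "Response" triplet above is the Kubernetes client being throttled: the server side answers with a Retry-After header, the client sleeps the advertised delay (here 1s), re-issues the GET, and after ten attempts the poller in node_ready.go surfaces the EOF warning. A generic sketch of honoring Retry-After, assuming plain net/http rather than client-go's internal with_retry.go:

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // getWithRetryAfter is an illustrative sketch (not client-go's
    // with_retry.go) of the behaviour in the log: on a response carrying
    // a Retry-After header, wait the advertised delay and re-issue the
    // request, up to maxAttempts times.
    func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            resp, err := http.Get(url)
            if err != nil {
                return nil, err
            }
            ra := resp.Header.Get("Retry-After")
            if ra == "" {
                return resp, nil // no throttling hint; done
            }
            resp.Body.Close()
            secs, err := strconv.Atoi(ra) // Retry-After given in seconds
            if err != nil {
                secs = 1
            }
            fmt.Printf("Got a Retry-After response delay=%ds attempt=%d url=%s\n",
                secs, attempt, url)
            time.Sleep(time.Duration(secs) * time.Second)
        }
        return nil, fmt.Errorf("gave up on %s after %d attempts", url, maxAttempts)
    }

    func main() {
        if _, err := getWithRetryAfter("https://127.0.0.1:55398/api/v1/nodes/functional-247800", 10); err != nil {
            fmt.Println(err)
        }
    }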
	I1205 06:35:25.649594    3816 type.go:168] "Request Body" body=""
	I1205 06:35:25.649594    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.652081    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:26.652283    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:26.652283    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:26.656196    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:27.656951    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:27.656951    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:27.660911    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:28.661511    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:28.661511    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:28.665811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:29.666123    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:29.666562    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:29.669285    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:30.265388    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:30.346699    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:30.350211    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.350747    3816 retry.go:31] will retry after 20.097178843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.669645    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:30.669645    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:30.673744    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:31.674027    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:31.674411    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:31.676873    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:32.677707    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:32.677707    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:32.680779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:33.224337    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:33.301595    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:33.304702    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.304702    3816 retry.go:31] will retry after 17.498614608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.681368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:33.681368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:33.685247    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:34.685570    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:34.685570    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:34.689019    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:35.689478    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:35.689478    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.693423    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:35.693478    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:35.693605    3816 type.go:168] "Request Body" body=""
	I1205 06:35:35.693728    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.697203    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:36.697741    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:36.697741    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:36.700841    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:37.701712    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:37.701712    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:37.705613    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:38.706497    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:38.706497    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:38.709240    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:39.710263    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:39.710263    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:39.714262    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:40.714574    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:40.714574    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:40.717659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:41.717815    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:41.717815    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:41.720914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:42.722129    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:42.722129    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:42.725427    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:43.726728    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:43.727083    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:43.729850    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:44.730383    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:44.730383    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:44.733852    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:45.735220    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:45.735642    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.738135    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:45.738135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:45.738135    3816 type.go:168] "Request Body" body=""
	I1205 06:35:45.738135    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.740498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:46.740699    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:46.740699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:46.744820    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:47.745629    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:47.746108    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:47.748477    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:48.749130    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:48.749130    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:48.752304    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:49.753459    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:49.753860    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:49.756462    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:50.453778    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:50.536078    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.536601    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.536601    3816 retry.go:31] will retry after 10.835620015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.756979    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:50.756979    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:50.760402    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:50.808292    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:50.896096    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.901180    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.901180    3816 retry.go:31] will retry after 25.940426602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:51.761349    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:51.761349    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:51.763343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:35:52.765295    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:52.765295    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:52.768404    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:53.769128    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:53.769490    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:53.773090    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:54.773373    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:54.773373    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:54.776047    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:55.776319    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:55.776319    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.779826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:55.779933    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:55.780038    3816 type.go:168] "Request Body" body=""
	I1205 06:35:55.780038    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.782548    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:56.782984    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:56.782984    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:56.786482    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:57.787420    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:57.787420    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:57.791145    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:58.791893    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:58.792215    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:58.795191    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:59.795792    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:59.795792    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:59.798496    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:00.799902    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:00.800226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:00.803690    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:01.377212    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:01.460054    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:01.465324    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:01.465324    3816 retry.go:31] will retry after 27.628572595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:01.803905    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:01.803905    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:01.806773    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:02.807252    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:02.807252    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:02.809866    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:03.810536    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:03.810536    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:03.813578    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:04.814042    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:04.814042    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:04.817276    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:05.818288    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:05.818679    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:05.821810    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:36:05.821891    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:05.821987    3816 type.go:168] "Request Body" body=""
	I1205 06:36:05.821987    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:05.824311    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:06.824568    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:06.824568    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:06.828662    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:07.829627    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:07.829627    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:07.832420    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:08.833221    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:08.833221    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:08.837155    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:09.838074    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:09.838074    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:09.841184    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:10.842375    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:10.842375    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:10.844946    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:11.846051    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:11.846051    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:11.849339    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:12.849998    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:12.850423    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:12.852739    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:13.853070    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:13.853070    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:13.856576    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:14.857697    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:14.857697    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:14.863183    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:36:15.864368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:15.864368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:15.868275    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:36:15.868370    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:15.868414    3816 type.go:168] "Request Body" body=""
	I1205 06:36:15.868524    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:15.870901    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:16.847285    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:16.871649    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:16.871961    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:16.873985    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:16.928128    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:16.933236    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:16.933236    3816 retry.go:31] will retry after 34.477637514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:17.875167    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:17.875167    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:17.879555    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:18.879691    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:18.879691    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:18.882703    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:19.883482    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:19.883482    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:19.886835    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:20.887694    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:20.887694    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:20.890798    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:21.891367    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:21.891367    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:21.894170    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:22.894555    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:22.894555    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:22.898343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:23.898560    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:23.898560    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:23.901633    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:24.902026    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:24.902026    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:24.905116    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:25.905658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:25.905658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:25.908458    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:36:25.908570    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:25.908723    3816 type.go:168] "Request Body" body=""
	I1205 06:36:25.908723    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:25.911359    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:26.911630    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:26.911630    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:26.915364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:27.916524    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:27.916824    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:27.919661    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:28.920716    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:28.920716    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:28.923642    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:29.100195    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:29.179813    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.183920    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.184562    3816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
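Context for the failure above: with validation enabled, kubectl apply first downloads the apiserver's OpenAPI schema, so while the apiserver on localhost:8441 refuses connections the apply fails before any manifest is checked, and once the retry budget is exhausted minikube reports the addon error. The --validate=false escape hatch named in the error only skips the schema check; the apply would still fail at submission while the apiserver is down. A hypothetical pre-flight probe for this failure mode (names and endpoint handling are assumptions, not kubectl's code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkOpenAPI is a hypothetical probe for the failure mode above:
    // client-side validation first fetches the OpenAPI schema from the
    // apiserver, so when that endpoint is unreachable the apply fails
    // before any manifest is parsed against the schema.
    func checkOpenAPI(base string) error {
        client := &http.Client{
            Timeout: 32 * time.Second, // matches the ?timeout=32s in the log
            Transport: &http.Transport{
                // The probe only tests reachability, not server identity.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(base + "/openapi/v2")
        if err != nil {
            return fmt.Errorf("apiserver unreachable, apply would fail validation: %w", err)
        }
        defer resp.Body.Close()
        fmt.Println("openapi endpoint reachable:", resp.Status)
        return nil
    }

    func main() {
        if err := checkOpenAPI("https://localhost:8441"); err != nil {
            fmt.Println(err)
        }
    }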
	I1205 06:36:29.924461    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:29.924461    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:29.927800    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[... attempts 5-10 of the same GET https://127.0.0.1:55398/api/v1/nodes/functional-247800 repeat at ~1s intervals (06:36:30.928 - 06:36:35.951), each answered with an empty response in 2-3 ms ...]
	W1205 06:36:35.951285    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
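The surrounding loop is minikube's node-readiness wait (node_ready.go): once a second it GETs /api/v1/nodes/functional-247800 to read the node's Ready condition, and while the apiserver is down each ten-attempt retry round surfaces as the EOF warning above. A sketch of the same check with client-go, assuming the kubeconfig path shown in the apply commands (hypothetical helper names):

    // nodeReady is a sketch (hypothetical names) of the poll above: fetch the
    // node and inspect its Ready condition, retrying while the apiserver is down.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // EOF / connection refused while the apiserver restarts
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            ok, err := nodeReady(cs, "functional-247800")
            if err != nil {
                fmt.Println("will retry:", err)
            } else if ok {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(time.Second)
        }
    }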
	I1205 06:36:35.951859    3816 type.go:168] "Request Body" body=""
	I1205 06:36:35.951913    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:35.956062    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:36.956335    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:36.956335    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:36.959382    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	[... attempts 2-10 repeat identically at ~1s intervals (06:36:37.959 - 06:36:45.993), each answered with an empty response in 2-3 ms ...]
	W1205 06:36:45.994070    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:45.994133    3816 type.go:168] "Request Body" body=""
	I1205 06:36:45.994133    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:45.996849    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:46.997191    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:46.997191    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:47.002502    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	[... attempts 2-5 repeat identically at ~1s intervals (06:36:48.002 - 06:36:51.019) ...]
	I1205 06:36:51.417352    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:51.854034    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:36:51.865604    3816 out.go:179] * Enabled addons: 
	I1205 06:36:51.868880    3816 addons.go:530] duration metric: took 1m56.7213702s for enable addons: enabled=[]
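Because both callbacks failed, the addon phase ends with enabled=[], so the cluster will have no default StorageClass even after the apiserver recovers. A quick way to confirm that from Go once the API answers again, assuming the same kubeconfig (a sketch, not part of the test itself):

    // List StorageClasses and report which, if any, carry the
    // default-class annotation.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err) // still "connection refused" while the apiserver is down
        }
        for _, sc := range scs.Items {
            if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
                fmt.Println("default StorageClass:", sc.Name)
            }
        }
    }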
	I1205 06:36:52.020718    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:52.020718    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:52.023235    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	[... attempts 7-10 repeat identically at ~1s intervals (06:36:53.023 - 06:36:56.040) ...]
	W1205 06:36:56.040359    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:56.040359    3816 type.go:168] "Request Body" body=""
	I1205 06:36:56.040359    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:56.043162    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:57.043498    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:57.043941    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:57.046650    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	[... attempts 2-10 repeat identically at ~1s intervals (06:36:58.047 - 06:37:06.079) ...]
	W1205 06:37:06.079598    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[... the cycle repeats five more times: a fresh GET to https://127.0.0.1:55398/api/v1/nodes/functional-247800, ten Retry-After retries at ~1s intervals, then the same 'error getting node "functional-247800" condition "Ready" status (will retry): ... EOF' warning, logged at 06:37:16.121, 06:37:26.160, 06:37:36.200, 06:37:46.242, and 06:37:56.281; the excerpt ends mid-cycle at 06:38:05.316 during attempt 9 ...]
	I1205 06:38:06.316866    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:06.316866    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:06.319941    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:06.319941    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[... identical retry cycles condensed: the 10-attempt loop against https://127.0.0.1:55398/api/v1/nodes/functional-247800 shown above repeats nine more times between 06:38:06 and 06:39:36 (with_retry.go:234 "Got a Retry-After response" delay="1s" attempt=1..10, each round_trippers.go:632 response empty and answered in 1-5 ms); every cycle ends with the same node_ready.go:55 warning, the last occurrence of which follows ...]
	W1205 06:39:36.702717    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:39:36.702862    3816 type.go:168] "Request Body" body=""
	I1205 06:39:36.702916    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:36.706473    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:37.707504    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:37.707504    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:37.710813    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:38.710939    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:38.711535    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:38.716232    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:39:39.717207    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:39.717207    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:39.720152    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:40.720331    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:40.720331    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:40.722990    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:41.723691    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:41.723691    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:41.726966    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:42.727268    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:42.727268    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:42.731157    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:43.731449    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:43.731449    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:43.733873    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:44.734365    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:44.734365    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:44.737250    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:45.738219    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:45.738219    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:45.741606    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:46.742116    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:46.742448    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:46.744702    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:39:46.745230    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:39:46.745415    3816 type.go:168] "Request Body" body=""
	I1205 06:39:46.745518    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:46.747577    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:47.748110    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:47.748110    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:47.751287    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:48.751998    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:48.751998    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:48.755225    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:49.756362    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:49.756362    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:49.758876    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:50.759512    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:50.759512    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:50.762228    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:51.762926    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:51.762926    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:51.766327    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:52.766951    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:52.766951    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:52.770535    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:53.771298    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:53.771298    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:53.774215    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:54.774580    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:54.774580    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:54.777547    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:55.778421    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:55.778421    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:55.781650    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:56.782155    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:56.783007    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:56.785844    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:39:56.785844    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:39:56.785844    3816 type.go:168] "Request Body" body=""
	I1205 06:39:56.785844    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:56.788526    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:57.788851    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:57.788851    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:57.791811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:58.792393    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:58.792393    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:58.796105    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:59.796407    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:59.796407    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:59.799250    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:00.799796    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:00.799796    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:00.803018    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:01.803711    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:01.803711    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:01.806363    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:02.806549    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:02.806979    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:02.810046    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:03.810372    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:03.810808    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:03.813835    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:04.814104    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:04.814104    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:04.817217    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:05.817542    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:05.817985    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:05.820814    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:06.821479    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:06.821479    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:06.825616    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1205 06:40:06.825616    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:06.825616    3816 type.go:168] "Request Body" body=""
	I1205 06:40:06.825616    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:06.828168    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:07.828495    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:07.828495    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:07.831826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:08.832009    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:08.832009    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:08.834677    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:09.834944    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:09.834944    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:09.838182    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:10.838841    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:10.838841    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:10.842122    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:11.842336    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:11.842336    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:11.845418    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:12.846381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:12.846722    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:12.849321    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:13.849671    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:13.850100    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:13.852968    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:14.853642    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:14.853642    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:14.856503    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:15.856908    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:15.856908    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:15.861027    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:40:16.862019    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:16.862328    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:16.864135    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1205 06:40:16.864135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:16.864135    3816 type.go:168] "Request Body" body=""
	I1205 06:40:16.864652    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:16.866384    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:40:17.867632    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:17.867632    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:17.870561    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:18.871085    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:18.871085    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:18.874523    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:19.874746    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:19.874746    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:19.877529    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:20.878119    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:20.878119    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:20.881395    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:21.881716    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:21.881716    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:21.884145    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:22.884876    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:22.884876    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:22.887889    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:23.888341    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:23.888494    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:23.891334    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:24.891830    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:24.891830    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:24.895547    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:25.896077    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:25.896077    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:25.898755    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:26.899940    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:26.899940    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:26.903829    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:40:26.903925    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:26.904028    3816 type.go:168] "Request Body" body=""
	I1205 06:40:26.904082    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:26.907442    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:27.907744    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:27.907744    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:27.911092    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:28.911316    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:28.911316    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:28.914347    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:29.914739    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:29.914739    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:29.918366    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:30.918822    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:30.918822    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:30.921456    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:31.922028    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:31.922028    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:31.925069    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:32.925330    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:32.925330    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:32.928779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:33.929376    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:33.929376    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:33.933212    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:34.933571    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:34.933571    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:34.936160    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:35.937442    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:35.937442    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:35.941103    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:36.941232    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:36.941232    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.943558    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:36.943558    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:36.943558    3816 type.go:168] "Request Body" body=""
	I1205 06:40:36.943558    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.946031    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:40:37.946448    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:37.946847    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:37.949586    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:38.949756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:38.950157    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:38.952901    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:39.953375    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:39.953783    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:39.956248    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:40.957703    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:40.957703    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:40.960899    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:41.961836    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:41.961836    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:41.965167    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:42.965316    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:42.965560    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:42.968007    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:43.968734    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:43.968734    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:43.971410    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:44.972311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:44.972311    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:44.975433    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:45.976381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:45.976381    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:45.981080    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:40:46.981463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:46.981463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.986037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1205 06:40:46.986125    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:46.986226    3816 type.go:168] "Request Body" body=""
	I1205 06:40:46.986226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.989122    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:47.989324    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:47.989324    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:47.992720    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:48.992852    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:48.992852    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:48.995205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:49.995580    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:49.995580    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:49.998526    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:50.998794    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:50.998794    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:51.001637    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:52.002658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:52.002658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:52.004968    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:53.005044    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:53.005445    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:53.008445    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:54.009089    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:54.009089    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:54.012447    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:55.012756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:55.012756    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:55.015364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:55.523386    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 06:40:55.523386    3816 node_ready.go:38] duration metric: took 6m0.0010607s for node "functional-247800" to be "Ready" ...
	I1205 06:40:55.527309    3816 out.go:203] 
	W1205 06:40:55.529851    3816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:40:55.529851    3816 out.go:285] * 
	W1205 06:40:55.531579    3816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:40:55.533404    3816 out.go:203] 
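
	The six minutes above are a single pattern on repeat: every GET to /api/v1/nodes/functional-247800 dies with EOF, the client retry layer honors the 1s Retry-After for up to ten attempts, node_ready.go logs a "will retry" warning, and the cycle restarts until the 6m0s deadline trips with "context deadline exceeded". A minimal sketch of an equivalent readiness poll, assuming a stock client-go clientset and kubeconfig; the node name and the 1s/6m timings come from the log, and nothing here is minikube's actual code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls once per second until the node reports Ready or the
	// six-minute deadline expires, mirroring the retry/warning loop in the log.
	func waitNodeReady(cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 1*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient errors (EOF, refused) stay non-fatal: log and retry,
					// which is why the log shows warnings instead of an early exit.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitNodeReady(cs, "functional-247800"))
	}

	Returning false, nil from the condition is what keeps each EOF retryable; only the expired context surfaces, as the final GUEST_START failure above shows.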
	
	
	==> Docker <==
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.520999227Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521005327Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521028530Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521065534Z" level=info msg="Initializing buildkit"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.631468044Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636567622Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725240Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636825651Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725440Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:34:51 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:51 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:34:52 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:34:52 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:40:58.519799   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:40:58.521081   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:40:58.522199   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:40:58.523184   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:40:58.526837   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001158] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001030] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001035] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000969] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000975] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:34] CPU: 4 PID: 56451 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000864] RIP: 0033:0x7f46c5e3eb20
	[  +0.000406] Code: Unable to access opcode bytes at RIP 0x7f46c5e3eaf6.
	[  +0.000950] RSP: 002b:00007fff1eb3d7e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001108] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001199] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000983] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000845] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000799] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000884] FS:  0000000000000000 GS:  0000000000000000
	[  +0.829311] CPU: 0 PID: 56573 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000781] RIP: 0033:0x7f241df52b20
	[  +0.000533] Code: Unable to access opcode bytes at RIP 0x7f241df52af6.
	[  +0.000663] RSP: 002b:00007ffded7fa4e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000781] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:40:58 up  2:14,  0 user,  load average: 0.22, 0.34, 0.61
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:40:54 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:40:55 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 819.
	Dec 05 06:40:55 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:55 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:55 functional-247800 kubelet[17702]: E1205 06:40:55.740189   17702 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:40:55 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:40:55 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:40:56 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 820.
	Dec 05 06:40:56 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:56 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:56 functional-247800 kubelet[17715]: E1205 06:40:56.512622   17715 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:40:56 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:40:56 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:40:57 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 821.
	Dec 05 06:40:57 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:57 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:57 functional-247800 kubelet[17743]: E1205 06:40:57.247294   17743 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:40:57 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:40:57 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:40:57 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 822.
	Dec 05 06:40:57 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:57 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:40:57 functional-247800 kubelet[17768]: E1205 06:40:57.996478   17768 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:40:58 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:40:58 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
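The kubelet section in the log above is the root cause of this failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and both the dockerd deprecation warning and the kubelet error show that this 5.15.153.1-microsoft-standard-WSL2 kernel is running cgroup v1. With the kubelet crash-looping (restart counter 819-822), the apiserver on port 8441 never comes up, which is why the describe-nodes section reports "connection refused". A generic way to check the node's cgroup version (a diagnostic sketch, not part of the recorded run; it assumes stat is available in the kicbase image) is:

	docker exec functional-247800 stat -fc %T /sys/fs/cgroup/
	# prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host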
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (611.3744ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (376.22s)
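Because the kubelet never stays up, nothing listens on the apiserver port inside the node. A quick host-side confirmation (a generic diagnostic, not part of the recorded run; it assumes curl is present in the kicbase image) is:

	docker exec functional-247800 curl -sk --max-time 5 https://localhost:8441/healthz
	# fails with a non-zero exit (connection refused) while the control plane is down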

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (54.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-247800 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-247800 get po -A: exit status 1 (50.3796467s)

** stderr ** 
	E1205 06:41:10.468394    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:41:20.506476    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:41:30.546356    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:41:40.584338    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:41:50.627320    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-247800 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1205 06:41:10.468394    5844 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55398/api?timeout=32s\\\": EOF\"\nE1205 06:41:20.506476    5844 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55398/api?timeout=32s\\\": EOF\"\nE1205 06:41:30.546356    5844 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55398/api?timeout=32s\\\": EOF\"\nE1205 06:41:40.584338    5844 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55398/api?timeout=32s\\\": EOF\"\nE1205 06:41:50.627320    5844 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:55398/api?timeout=32s\\\": EOF\"\nUnable to connect to the server: EOF\n"*: args "kubectl --context functional-247800 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-247800 get po -A"
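Here kubectl targets https://127.0.0.1:55398, the host side of the Docker port mapping for the apiserver; the docker inspect output below confirms that 8441/tcp in the container is published on 127.0.0.1:55398. One way to read that mapping directly (a generic command, not part of the recorded run) is:

	docker port functional-247800 8441/tcp
	# prints 127.0.0.1:55398 for this container

The EOF errors (rather than "connection refused") are consistent with the Docker userland proxy accepting the TCP connection on the host and then dropping it because nothing is listening on 8441 inside the container.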
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (690.3728ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.5763652s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-088800 image save kicbase/echo-server:functional-088800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image rm kicbase/echo-server:functional-088800 --alsologtostderr                                                                        │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ ssh            │ functional-088800 ssh sudo cat /etc/test/nested/copy/8036/hosts                                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ image          │ functional-088800 image save --daemon kicbase/echo-server:functional-088800 --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ start          │ -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ start          │ -p functional-088800 --dry-run --alsologtostderr -v=1 --driver=docker                                                                                     │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ start          │ -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker                                                                           │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-088800 --alsologtostderr -v=1                                                                                            │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ service        │ functional-088800 service hello-node --url                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ update-context │ functional-088800 update-context --alsologtostderr -v=2                                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format short --alsologtostderr                                                                                               │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format yaml --alsologtostderr                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ ssh            │ functional-088800 ssh pgrep buildkitd                                                                                                                     │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ image          │ functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr                                                    │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls                                                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format json --alsologtostderr                                                                                                │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image          │ functional-088800 image ls --format table --alsologtostderr                                                                                               │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ delete         │ -p functional-088800                                                                                                                                      │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:26 UTC │
	│ start          │ -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:26 UTC │                     │
	│ start          │ -p functional-247800 --alsologtostderr -v=8                                                                                                               │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:34 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:34:44
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:34:43.990318    3816 out.go:360] Setting OutFile to fd 932 ...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.034404    3816 out.go:374] Setting ErrFile to fd 1564...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.048005    3816 out.go:368] Setting JSON to false
	I1205 06:34:44.051134    3816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7741,"bootTime":1764908742,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:34:44.051134    3816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:34:44.054997    3816 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:34:44.057041    3816 notify.go:221] Checking for updates...
	I1205 06:34:44.057041    3816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:44.060615    3816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:34:44.063386    3816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:34:44.065338    3816 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:34:44.068100    3816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:34:44.070765    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:44.071546    3816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:34:44.185014    3816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:34:44.190117    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.434951    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.415349563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.438948    3816 out.go:179] * Using the docker driver based on existing profile
	I1205 06:34:44.442716    3816 start.go:309] selected driver: docker
	I1205 06:34:44.442716    3816 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.442716    3816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:34:44.449451    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.693650    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.673163701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.776708    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:44.776708    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:44.776708    3816 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.779353    3816 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:34:44.789396    3816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:34:44.793121    3816 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:34:44.794774    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:44.794774    3816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:34:44.844630    3816 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:44.871213    3816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:34:44.871213    3816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:34:45.153466    3816 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:45.154472    3816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:34:45.156762    3816 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:34:45.156819    3816 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:45.157157    3816 start.go:364] duration metric: took 122.3µs to acquireMachinesLock for "functional-247800"
	I1205 06:34:45.157157    3816 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:34:45.157157    3816 fix.go:54] fixHost starting: 
	I1205 06:34:45.165313    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:45.243648    3816 fix.go:112] recreateIfNeeded on functional-247800: state=Running err=<nil>
	W1205 06:34:45.243648    3816 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:34:45.267762    3816 out.go:252] * Updating the running docker "functional-247800" container ...
	I1205 06:34:45.269766    3816 machine.go:94] provisionDockerMachine start ...
	I1205 06:34:45.274766    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:45.449049    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:45.449049    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:45.449049    3816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:34:45.686505    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:45.686505    3816 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:34:45.691507    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:46.703091    3816 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800: (1.0115691s)
	I1205 06:34:46.706016    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:46.706016    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:46.706016    3816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:34:47.035712    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:47.042684    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.107199    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:47.107199    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:47.107199    3816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:34:47.308149    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:34:47.308197    3816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:34:47.308318    3816 ubuntu.go:190] setting up certificates
	I1205 06:34:47.308318    3816 provision.go:84] configureAuth start
	I1205 06:34:47.315253    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:47.380504    3816 provision.go:143] copyHostCerts
	I1205 06:34:47.381517    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:34:47.381517    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:34:47.382508    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:34:47.382508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:34:47.383507    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:34:47.384508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:34:47.385507    3816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
	I1205 06:34:47.573727    3816 provision.go:177] copyRemoteCerts
	I1205 06:34:47.580429    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:34:47.585428    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.664000    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:47.815162    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1205 06:34:47.815801    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:34:47.849954    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1205 06:34:47.850956    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:34:47.876175    3816 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.876248    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:34:47.876248    3816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.7217371s
	I1205 06:34:47.876248    3816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:34:47.883801    3816 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.883881    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:34:47.883881    3816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.72937s
	I1205 06:34:47.883881    3816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:34:47.908586    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1205 06:34:47.909421    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:34:47.925048    3816 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.925345    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:34:47.925345    3816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.7708333s
	I1205 06:34:47.925345    3816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:34:47.926059    3816 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.926059    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:34:47.926059    3816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.7715471s
	I1205 06:34:47.926059    3816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:34:47.936781    3816 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.937442    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:34:47.937555    3816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.7830428s
	I1205 06:34:47.937609    3816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:34:47.946154    3816 provision.go:87] duration metric: took 637.8269ms to configureAuth
	I1205 06:34:47.946231    3816 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:34:47.946358    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:47.951931    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.990646    3816 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.990646    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:34:47.991641    3816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8371282s
	I1205 06:34:47.991641    3816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:34:48.007838    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.008431    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.008476    3816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:34:48.018898    3816 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.018898    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:34:48.018898    3816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.8643851s
	I1205 06:34:48.018898    3816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:34:48.061664    3816 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.062004    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:34:48.062141    3816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9076274s
	I1205 06:34:48.062141    3816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:34:48.062198    3816 cache.go:87] Successfully saved all images to host disk.
	I1205 06:34:48.196159    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:34:48.196159    3816 ubuntu.go:71] root file system type: overlay
	I1205 06:34:48.196159    3816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:34:48.200167    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.256431    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.257239    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.257347    3816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:34:48.462598    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:34:48.466014    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.522845    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.523383    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.523415    3816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:34:48.714113    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
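
As the comments in the generated unit note, the empty ExecStart= line clears the command inherited from the base configuration before the replacement is set; otherwise systemd would see two ExecStart= values and refuse to start a Type=notify service. A minimal sketch of the same reset idiom as a drop-in override (hypothetical path and trimmed command, not what minikube writes here):

  # Hypothetical drop-in demonstrating the ExecStart reset idiom used above.
  sudo mkdir -p /etc/systemd/system/docker.service.d
  printf '%s\n' '[Service]' 'ExecStart=' \
    'ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock' \
    | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
  sudo systemctl daemon-reload && sudo systemctl restart docker
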
	I1205 06:34:48.714641    3816 machine.go:97] duration metric: took 3.444826s to provisionDockerMachine
	I1205 06:34:48.714700    3816 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:34:48.714747    3816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:34:48.721762    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:34:48.726053    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.800573    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:48.947188    3816 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:34:48.954494    3816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_ID="12"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:34:48.954494    3816 command_runner.go:130] > ID=debian
	I1205 06:34:48.954494    3816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:34:48.954494    3816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:34:48.955010    3816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:34:48.955099    3816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:34:48.955099    3816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:34:48.955806    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:34:48.955806    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /etc/ssl/certs/80362.pem
	I1205 06:34:48.956436    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:34:48.956436    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> /etc/test/nested/copy/8036/hosts
	I1205 06:34:48.960827    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:34:48.973199    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:34:49.002014    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:34:49.027943    3816 start.go:296] duration metric: took 313.2383ms for postStartSetup
	I1205 06:34:49.031806    3816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:34:49.035611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.090476    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.213008    3816 command_runner.go:130] > 1%
	I1205 06:34:49.217907    3816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:34:49.227048    3816 command_runner.go:130] > 950G
	I1205 06:34:49.227093    3816 fix.go:56] duration metric: took 4.0698775s for fixHost
	I1205 06:34:49.227184    3816 start.go:83] releasing machines lock for "functional-247800", held for 4.069942s
	I1205 06:34:49.230591    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:49.286648    3816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:34:49.290773    3816 ssh_runner.go:195] Run: cat /version.json
	I1205 06:34:49.290773    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.294768    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.346982    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.347419    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.463868    3816 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:34:49.468593    3816 ssh_runner.go:195] Run: systemctl --version
	I1205 06:34:49.473361    3816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1205 06:34:49.473361    3816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
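
The probe fails because the Windows binary name curl.exe is passed through to the Linux guest, where only curl exists; this is what triggers the registry warning further below. Assuming curl is installed in the guest image, the equivalent working probe is:

  # Same registry reachability probe, with the Linux binary name.
  curl -sS -m 2 https://registry.k8s.io/
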
	I1205 06:34:49.482411    3816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:34:49.482411    3816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:34:49.486655    3816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:34:49.495075    3816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:34:49.495101    3816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:34:49.499557    3816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:34:49.512091    3816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:34:49.512091    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:49.512091    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:49.512091    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:49.534248    3816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:34:49.538479    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:34:49.557417    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:34:49.572725    3816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:34:49.577000    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1205 06:34:49.583562    3816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:34:49.583562    3816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 06:34:49.600012    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.618632    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:34:49.636357    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.654641    3816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:34:49.675114    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:34:49.696597    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:34:49.715167    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:34:49.738213    3816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:34:49.750303    3816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:34:49.754900    3816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:34:49.771255    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:49.909849    3816 ssh_runner.go:195] Run: sudo systemctl restart containerd
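
The sed edits above rewrite /etc/containerd/config.toml to the cgroupfs driver and the expected sandbox image before restarting containerd. A quick way to confirm the resulting values, assuming the default config path:

  # Confirm the settings the sed edits above should leave behind.
  grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
  # Expected: SystemdCgroup = false and sandbox_image = "registry.k8s.io/pause:3.10.1"
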
	I1205 06:34:50.068262    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:50.068262    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:50.073308    3816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:34:50.092739    3816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1205 06:34:50.092785    3816 command_runner.go:130] > [Unit]
	I1205 06:34:50.092785    3816 command_runner.go:130] > Description=Docker Application Container Engine
	I1205 06:34:50.092785    3816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1205 06:34:50.092828    3816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1205 06:34:50.092828    3816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1205 06:34:50.092828    3816 command_runner.go:130] > Requires=docker.socket
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitBurst=3
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitIntervalSec=60
	I1205 06:34:50.092884    3816 command_runner.go:130] > [Service]
	I1205 06:34:50.092884    3816 command_runner.go:130] > Type=notify
	I1205 06:34:50.092884    3816 command_runner.go:130] > Restart=always
	I1205 06:34:50.092919    3816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1205 06:34:50.092943    3816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1205 06:34:50.092943    3816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1205 06:34:50.092943    3816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1205 06:34:50.092943    3816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1205 06:34:50.092943    3816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1205 06:34:50.092943    3816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1205 06:34:50.092943    3816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNOFILE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNPROC=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitCORE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1205 06:34:50.092943    3816 command_runner.go:130] > TasksMax=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > TimeoutStartSec=0
	I1205 06:34:50.092943    3816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1205 06:34:50.092943    3816 command_runner.go:130] > Delegate=yes
	I1205 06:34:50.092943    3816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1205 06:34:50.092943    3816 command_runner.go:130] > KillMode=process
	I1205 06:34:50.092943    3816 command_runner.go:130] > OOMScoreAdjust=-500
	I1205 06:34:50.092943    3816 command_runner.go:130] > [Install]
	I1205 06:34:50.092943    3816 command_runner.go:130] > WantedBy=multi-user.target
	I1205 06:34:50.097721    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.125496    3816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:34:50.186929    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.209805    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:34:50.227504    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:50.252330    3816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1205 06:34:50.256641    3816 ssh_runner.go:195] Run: which cri-dockerd
	I1205 06:34:50.264328    3816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1205 06:34:50.269234    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:34:50.282005    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:34:50.306573    3816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:34:50.447619    3816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:34:50.580607    3816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:34:50.581126    3816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 06:34:50.605071    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:34:50.630349    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:50.782135    3816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:34:51.643866    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:34:51.667031    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:34:51.689935    3816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 06:34:51.715903    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:51.740104    3816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:34:51.897148    3816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:34:52.038509    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.188129    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:34:52.216759    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:34:52.241711    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.388958    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:34:52.491038    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:52.508998    3816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:34:52.514460    3816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:34:52.523944    3816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1205 06:34:52.524474    3816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:34:52.524548    3816 command_runner.go:130] > Device: 0,112	Inode: 1756        Links: 1
	I1205 06:34:52.524589    3816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1205 06:34:52.524606    3816 command_runner.go:130] > Access: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524642    3816 command_runner.go:130] > Modify: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] > Change: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] >  Birth: -
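
The stat above is the readiness check behind the 60s socket wait. Scripted by hand, the same poll could look like this sketch (timeout mirrors the value in the log):

  # Poll up to 60s for the cri-dockerd socket, then show its metadata.
  for i in $(seq 1 60); do
    [ -S /var/run/cri-dockerd.sock ] && break
    sleep 1
  done
  stat /var/run/cri-dockerd.sock
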
	I1205 06:34:52.524737    3816 start.go:564] Will wait 60s for crictl version
	I1205 06:34:52.529361    3816 ssh_runner.go:195] Run: which crictl
	I1205 06:34:52.536028    3816 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:34:52.539850    3816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:34:52.581379    3816 command_runner.go:130] > Version:  0.1.0
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeName:  docker
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeVersion:  29.0.4
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:34:52.581379    3816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 06:34:52.585592    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.624737    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.628712    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.665154    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.668797    3816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:34:52.672375    3816 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:34:52.798876    3816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:34:52.801876    3816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:34:52.809731    3816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1205 06:34:52.813378    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:52.870537    3816 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:34:52.870721    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:52.873969    3816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:52.909019    3816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 06:34:52.909019    3816 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:34:52.909019    3816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:34:52.909019    3816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:34:52.913141    3816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:34:52.986014    3816 command_runner.go:130] > cgroupfs
	I1205 06:34:52.986014    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:52.986014    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:52.986014    3816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:34:52.986014    3816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:34:52.986014    3816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
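
The generated file stitches InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single YAML stream. When diffing such a file against upstream defaults, kubeadm can print its built-in values for the same kinds (assuming kubeadm is on PATH):

  # Print upstream defaults for the config kinds used above.
  kubeadm config print init-defaults \
    --component-configs KubeletConfiguration,KubeProxyConfiguration
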
	
	I1205 06:34:52.990595    3816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubeadm
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubectl
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubelet
	I1205 06:34:53.003509    3816 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:34:53.008042    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:34:53.020762    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:34:53.041328    3816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:34:53.061676    3816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1205 06:34:53.085180    3816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:34:53.093591    3816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:34:53.098459    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:53.247095    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:53.952452    3816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:34:53.952558    3816 certs.go:195] generating shared ca certs ...
	I1205 06:34:53.952558    3816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:53.953085    3816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:34:53.953228    3816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:34:53.953228    3816 certs.go:257] generating profile certs ...
	I1205 06:34:53.954037    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:34:53.954334    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:34:53.954527    3816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:34:53.954527    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:34:53.954631    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:34:53.954814    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:34:53.954910    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:34:53.954973    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:34:53.955045    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:34:53.955116    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:34:53.955223    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:34:53.955290    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:34:53.955826    3816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:34:53.955954    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:34:53.956129    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:34:53.956912    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:34:53.957083    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem -> /usr/share/ca-certificates/8036.pem
	I1205 06:34:53.957119    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /usr/share/ca-certificates/80362.pem
	I1205 06:34:53.957269    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:53.958214    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:34:53.988313    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:34:54.013387    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:34:54.046063    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:34:54.077041    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:34:54.105745    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:34:54.131011    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:34:54.161212    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:34:54.186054    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:34:54.215522    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:34:54.241991    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:34:54.271902    3816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:34:54.296449    3816 ssh_runner.go:195] Run: openssl version
	I1205 06:34:54.306573    3816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:34:54.311042    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.336884    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:34:54.353148    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.366452    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.412489    3816 command_runner.go:130] > 3ec20f2e
	I1205 06:34:54.416608    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:34:54.434824    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.453553    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:34:54.472739    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481910    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481979    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.485785    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.529492    3816 command_runner.go:130] > b5213941
	I1205 06:34:54.534432    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:34:54.550655    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.568891    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:34:54.588631    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.607947    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.650843    3816 command_runner.go:130] > 51391683
	I1205 06:34:54.656334    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
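
For context on the openssl/ln sequence above: OpenSSL resolves trust-store CAs in /etc/ssl/certs by subject-hash filenames, so each installed PEM must be reachable via a <hash>.0 symlink (e.g. 3ec20f2e.0 for 80362.pem in this run). A minimal sketch of the same check-and-link step, run locally rather than over SSH, and collapsing the name link and hash link the log creates/checks into one (helper names are illustrative, not minikube's):

// calinks.go: sketch of the CA hash-symlink step seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns what `openssl x509 -hash -noout -in pem` prints,
// e.g. "3ec20f2e" for 80362.pem in the log.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", pem, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func ensureTrusted(pem string) error {
	hash, err := subjectHash(pem)
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs is idempotent, which is why the log can rerun it freely.
	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
		return fmt.Errorf("linking %s: %w", link, err)
	}
	// Same validation the log performs: the symlink must exist (test -L).
	return exec.Command("sudo", "test", "-L", link).Run()
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/80362.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/8036.pem",
	} {
		if err := ensureTrusted(pem); err != nil {
			fmt.Println("not trusted:", err)
		}
	}
}
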
	I1205 06:34:54.673967    3816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.682495    3816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.683019    3816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:34:54.683019    3816 command_runner.go:130] > Device: 8,48	Inode: 15231       Links: 1
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: 2025-12-05 06:30:39.655512939 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Modify: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Change: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] >  Birth: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.687561    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:34:54.732319    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.737009    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:34:54.781446    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.785553    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:34:54.831869    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.837267    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:34:54.879433    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.883677    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:34:54.927800    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.932770    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:34:54.976702    3816 command_runner.go:130] > Certificate will not expire
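
The `-checkend 86400` runs above are a 24-hour expiry gate: openssl exits 0 (printing "Certificate will not expire") when the certificate is still valid 86400 seconds from now, and non-zero otherwise. A small sketch of the same gate, with the caveat that a non-zero exit can also mean openssl itself failed:

// checkend.go: the 24-hour certificate expiry gate from the log.
package main

import (
	"fmt"
	"os/exec"
)

// willExpireWithin reports whether cert expires within `seconds`;
// `openssl x509 -checkend` exits non-zero exactly in that case
// (or when openssl fails for another reason, e.g. a missing file).
func willExpireWithin(cert string, seconds int) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", cert,
		"-checkend", fmt.Sprint(seconds))
	return cmd.Run() != nil
}

func main() {
	for _, crt := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		if willExpireWithin(crt, 86400) {
			fmt.Println(crt, "expires within 24h; would regenerate")
		} else {
			fmt.Println(crt, "will not expire") // matches the log output
		}
	}
}
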
	I1205 06:34:54.977317    3816 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:54.981646    3816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:34:55.016824    3816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:34:55.029851    3816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:34:55.029954    3816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:34:55.029954    3816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:34:55.034067    3816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:34:55.049954    3816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:34:55.054431    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.105351    3816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-247800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.105351    3816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-247800" cluster setting kubeconfig missing "functional-247800" context setting]
	I1205 06:34:55.106335    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.121466    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.122042    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
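
The two "kubeconfig missing" findings a few lines up are repaired by writing the cluster and context entries back into the kubeconfig file. A sketch of that repair with client-go's clientcmd API, using the server, CA, and client-cert values visible in the log (error handling trimmed; this is not minikube's actual code path):

// repairkubeconfig.go: re-add missing cluster/context entries, as the
// "needs updating (will repair)" step above does.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is unreadable
	}
	base := `C:\Users\jenkins.minikube4\minikube-integration\.minikube`
	cfg.Clusters["functional-247800"] = &api.Cluster{
		Server:               "https://127.0.0.1:55398",
		CertificateAuthority: base + `\ca.crt`,
	}
	cfg.AuthInfos["functional-247800"] = &api.AuthInfo{
		ClientCertificate: base + `\profiles\functional-247800\client.crt`,
		ClientKey:         base + `\profiles\functional-247800\client.key`,
	}
	cfg.Contexts["functional-247800"] = &api.Context{
		Cluster:  "functional-247800",
		AuthInfo: "functional-247800",
	}
	cfg.CurrentContext = "functional-247800"
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
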
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:34:55.123267    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:34:55.127724    3816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:34:55.143728    3816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 06:34:55.143728    3816 kubeadm.go:602] duration metric: took 113.7728ms to restartPrimaryControlPlane
	I1205 06:34:55.143728    3816 kubeadm.go:403] duration metric: took 166.4081ms to StartCluster
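
restartPrimaryControlPlane completes in ~114ms here because the decision is just a file comparison: minikube diffs the kubeadm config already on the node against the freshly generated one, and identical files (diff exit status 0) mean "The running cluster does not require reconfiguration". A minimal sketch of that decision, using the file names from the log (the surrounding restart logic is assumed):

// needsreconfig.go: the kubeadm.yaml comparison behind the log lines above.
package main

import (
	"fmt"
	"os/exec"
)

// needsReconfiguration mirrors `sudo diff -u old new`:
// diff exits 0 when the files match, 1 when they differ, >1 on error.
func needsReconfiguration(oldPath, newPath string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: reuse the running control plane
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // files differ: reconfigure
	}
	return false, err // diff itself failed
}

func main() {
	diff, err := needsReconfiguration(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	fmt.Println("reconfiguration needed:", diff)
}
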
	I1205 06:34:55.143728    3816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.143728    3816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.145169    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.145829    3816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 06:34:55.145829    3816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:34:55.145829    3816 addons.go:70] Setting storage-provisioner=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:70] Setting default-storageclass=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:239] Setting addon storage-provisioner=true in "functional-247800"
	I1205 06:34:55.145829    3816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-247800"
	I1205 06:34:55.145829    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:55.145829    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.153665    3816 out.go:179] * Verifying Kubernetes components...
	I1205 06:34:55.154863    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.158249    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.163403    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:55.210939    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.211668    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.212897    3816 addons.go:239] Setting addon default-storageclass=true in "functional-247800"
	I1205 06:34:55.212990    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.213105    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.217433    3816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:55.222787    3816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.222787    3816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:34:55.224705    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.226041    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.278804    3816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.278804    3816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:34:55.278889    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.282998    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.334515    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.337518    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:55.430551    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.457611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.475848    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.517112    3816 node_ready.go:35] waiting up to 6m0s for node "functional-247800" to be "Ready" ...
	I1205 06:34:55.517112    3816 type.go:168] "Request Body" body=""
	I1205 06:34:55.517112    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:55.519131    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:34:55.528125    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.578790    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.578790    3816 retry.go:31] will retry after 337.958227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.602029    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.605442    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.605442    3816 retry.go:31] will retry after 279.867444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
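
Each failed apply above is fed through minikube's retry helper (the retry.go:31 lines), which reruns the command after a short randomized delay: 337ms, 279ms, 509ms, 471ms in this run, growing toward multi-second waits as the apiserver stays down. The helper's internals aren't shown in the log; a minimal stand-in with jittered exponential backoff looks like:

// retrybackoff.go: a stand-in for the "will retry after ..." pattern above.
// The exact delays in the log come from minikube's own helper; this only
// sketches the shape (exponential growth with random jitter).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the base each attempt, then apply +/-50% jitter, giving
		// an irregular sequence like the log's 337ms/279ms/509ms/...
		d := base << uint(i)
		d += time.Duration(rand.Int63n(int64(d))) - d/2
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("connection refused (attempt %d)", calls)
		}
		return nil
	})
	fmt.Println("succeeded after", calls, "attempts")
}
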
	I1205 06:34:55.890357    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.921657    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.969614    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.974371    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.974371    3816 retry.go:31] will retry after 509.000816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.006071    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.010642    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.010642    3816 retry.go:31] will retry after 471.064759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.487937    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:56.489162    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:56.520264    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:56.520264    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:56.523343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
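
The with_retry.go lines show the Kubernetes client honoring the apiserver's Retry-After header while the control plane restarts: each node GET is paused for the advertised one second and reissued, counting up an attempt budget before the failure surfaces to the caller. A hedged sketch of the same behavior with plain net/http (client-go's actual implementation differs; TLS setup against minikube's CA is omitted here, so a real run against this endpoint would fail certificate verification):

// retryafter.go: honor a Retry-After header, as the log's with_retry.go does.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err // e.g. connection refused / EOF, as in the log
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(ra)
		if err != nil {
			secs = 1 // Retry-After may also be an HTTP date; default to 1s
		}
		fmt.Printf("Got a Retry-After response delay=%ds attempt=%d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getWithRetryAfter("https://127.0.0.1:55398/api/v1/nodes/functional-247800", 10)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
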
	I1205 06:34:56.575976    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 407.043808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 638.604661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.992080    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.065952    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.069179    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.069179    3816 retry.go:31] will retry after 488.646188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.223461    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.294874    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.299418    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.299514    3816 retry.go:31] will retry after 602.819042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.524155    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:57.524155    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:57.527278    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:57.562706    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.639333    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.644388    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.644388    3816 retry.go:31] will retry after 1.399464773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.907870    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.981775    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.984813    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.984921    3816 retry.go:31] will retry after 1.652361939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:58.527501    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:58.527501    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:58.529897    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:34:59.050453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:59.133420    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.139944    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.139944    3816 retry.go:31] will retry after 1.645340531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.530709    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:59.530709    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:59.534391    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:59.642381    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:59.718427    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.721834    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.721834    3816 retry.go:31] will retry after 2.46016532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.534639    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:00.534639    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:00.541150    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:35:00.790675    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:00.867216    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:00.867216    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.867216    3816 retry.go:31] will retry after 3.092416499s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:01.541435    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:01.541435    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:01.544716    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:02.187405    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:02.268020    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:02.273203    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.273203    3816 retry.go:31] will retry after 2.104673669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.544980    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:02.544980    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:02.548584    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:03.548839    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:03.548839    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:03.553516    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:03.966453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:04.049450    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.054065    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.054065    3816 retry.go:31] will retry after 2.461370012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.382944    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:04.458068    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.461488    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.461488    3816 retry.go:31] will retry after 4.66223575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.554680    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:04.555045    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:04.559246    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:05.559799    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:05.560272    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.563266    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:05.563380    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:05.563407    3816 type.go:168] "Request Body" body=""
	I1205 06:35:05.563407    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.565659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
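
node_ready.go keeps re-requesting the Node object until its Ready condition reports True, logging and tolerating transient failures like the EOF above rather than aborting. With client-go, an equivalent wait loop is roughly the following (a sketch under the assumption of a reachable kubeconfig; function names are illustrative):

// nodeready.go: poll a node's Ready condition, as node_ready.go does above.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Transient EOF/refused while the apiserver restarts:
				// report it and keep polling, as the log above does.
				fmt.Println("will retry:", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == v1.NodeReady {
					return c.Status == v1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-247800", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}
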
	I1205 06:35:06.521322    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:06.565857    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:06.565857    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:06.569356    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:06.601193    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:06.606428    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:06.606428    3816 retry.go:31] will retry after 3.326595593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:07.570311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:07.570658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:07.572699    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:08.573282    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:08.573282    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:08.576531    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:09.129039    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:09.217404    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:09.217937    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.217937    3816 retry.go:31] will retry after 6.891085945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.577333    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:09.577333    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:09.580146    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:09.938122    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:10.010022    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:10.013513    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.013513    3816 retry.go:31] will retry after 11.942280673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
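
The delays announced by retry.go grow roughly geometrically with some randomness (3.3s then 11.9s for storageclass.yaml here; the parallel storage-provisioner loop logs 6.9s, 14.1s, 20.1s further down), which is the usual shape of jittered exponential backoff. The exact policy inside minikube's retry.go is not visible in this log; the sketch below shows the equivalent pattern using the k8s.io/apimachinery wait package, with every parameter chosen purely for illustration.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Illustrative parameters only; minikube's real retry settings
	// are not shown in the log above.
	backoff := wait.Backoff{
		Duration: 3 * time.Second, // first delay, close to the 3.3s seen above
		Factor:   1.8,             // multiplier between attempts
		Jitter:   0.3,             // randomizes delays, hence the uneven logged values
		Steps:    5,               // give up after five attempts
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: applying addon manifest\n", attempt)
		return false, nil // false, nil means "not done yet, retry after the next delay"
	})
	fmt.Println("gave up:", err) // wait.ErrWaitTimeout once Steps are exhausted
}
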
	I1205 06:35:10.581103    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:10.581488    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:10.585509    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:11.586198    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:11.586569    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:11.589434    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:12.589851    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:12.589851    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:12.594400    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:13.595039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:13.595039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:13.598596    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:14.599060    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:14.599060    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:14.601840    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:15.602885    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:15.602885    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.605878    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:15.605878    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:15.605878    3816 type.go:168] "Request Body" body=""
	I1205 06:35:15.605878    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.608593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
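
In parallel with the addon retries, node_ready.go keeps issuing GET /api/v1/nodes/functional-247800 and checking the node's Ready condition, and each cycle ends in the EOF warning above because the endpoint behind 127.0.0.1:55398 never serves a real response. A self-contained client-go sketch of that readiness check is below; the kubeconfig path is a placeholder (the test uses /var/lib/minikube/kubeconfig inside the node, not on the host), and the error branch is the one this log keeps hitting.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-247800", metav1.GetOptions{})
	if err != nil {
		// The branch the log keeps taking: the GET itself fails (EOF),
		// so the Ready condition is never even inspected.
		fmt.Println("error getting node (will retry):", err)
		return
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", cond.Status)
		}
	}
}
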
	I1205 06:35:16.114246    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:16.191406    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:16.193997    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.193997    3816 retry.go:31] will retry after 14.066483079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.609000    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:16.609000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:16.611991    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:17.612458    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:17.612996    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:17.617813    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:18.618806    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:18.618806    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:18.622265    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:19.623287    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:19.623287    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:19.627037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:20.627291    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:20.627658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:20.630318    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:21.630930    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:21.630930    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:21.635020    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:21.963392    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:22.044084    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:22.048902    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.048902    3816 retry.go:31] will retry after 11.169519715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.635453    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:22.635453    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:22.638251    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:23.639335    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:23.639335    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:23.642113    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:24.642790    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:24.642790    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:24.645713    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:25.646115    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:25.646115    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.649594    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:25.649594    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:25.649594    3816 type.go:168] "Request Body" body=""
	I1205 06:35:25.649594    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.652081    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:26.652283    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:26.652283    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:26.656196    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:27.656951    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:27.656951    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:27.660911    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:28.661511    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:28.661511    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:28.665811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:29.666123    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:29.666562    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:29.669285    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:30.265388    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:30.346699    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:30.350211    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.350747    3816 retry.go:31] will retry after 20.097178843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
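
Each ten-attempt burst from with_retry.go is client-go honoring an HTTP Retry-After header: the proxied endpoint on 127.0.0.1:55398 answers, tells the client to wait 1s, and the request is retried until attempt=10, at which point node_ready.go reports the EOF. A stdlib-only sketch of that header handling follows; the URL and attempt cap mirror the log but the function itself is an illustration, not client-go's implementation, and a real run against this endpoint would also need TLS configuration for the self-signed apiserver certificate.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter retries a GET while the server keeps answering with a
// Retry-After header, mirroring the attempt=1..10 loop logged by with_retry.go.
func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // normal response, no throttling
		}
		resp.Body.Close()
		secs, convErr := strconv.Atoi(ra) // Retry-After may also be an HTTP date; ignored here
		if convErr != nil {
			secs = 1
		}
		fmt.Printf("got Retry-After, attempt=%d, sleeping %ds\n", attempt, secs)
		time.Sleep(time.Duration(secs) * time.Second)
	}
	return nil, fmt.Errorf("still throttled after %d attempts", maxAttempts)
}

func main() {
	_, err := getWithRetryAfter("https://127.0.0.1:55398/api/v1/nodes/functional-247800", 10)
	fmt.Println(err)
}
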
	I1205 06:35:30.669645    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:30.669645    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:30.673744    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:31.674027    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:31.674411    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:31.676873    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:32.677707    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:32.677707    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:32.680779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:33.224337    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:33.301595    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:33.304702    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.304702    3816 retry.go:31] will retry after 17.498614608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.681368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:33.681368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:33.685247    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:34.685570    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:34.685570    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:34.689019    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:35.689478    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:35.689478    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.693423    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:35.693478    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:35.693605    3816 type.go:168] "Request Body" body=""
	I1205 06:35:35.693728    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.697203    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:36.697741    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:36.697741    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:36.700841    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:37.701712    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:37.701712    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:37.705613    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:38.706497    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:38.706497    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:38.709240    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:39.710263    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:39.710263    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:39.714262    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:40.714574    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:40.714574    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:40.717659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:41.717815    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:41.717815    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:41.720914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:42.722129    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:42.722129    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:42.725427    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:43.726728    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:43.727083    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:43.729850    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:44.730383    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:44.730383    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:44.733852    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:45.735220    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:45.735642    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.738135    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:45.738135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:45.738135    3816 type.go:168] "Request Body" body=""
	I1205 06:35:45.738135    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.740498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:46.740699    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:46.740699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:46.744820    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:47.745629    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:47.746108    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:47.748477    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:48.749130    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:48.749130    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:48.752304    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:49.753459    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:49.753860    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:49.756462    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:50.453778    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:50.536078    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.536601    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.536601    3816 retry.go:31] will retry after 10.835620015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.756979    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:50.756979    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:50.760402    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:50.808292    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:50.896096    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.901180    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.901180    3816 retry.go:31] will retry after 25.940426602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:51.761349    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:51.761349    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:51.763343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:35:52.765295    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:52.765295    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:52.768404    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:53.769128    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:53.769490    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:53.773090    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:54.773373    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:54.773373    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:54.776047    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:55.776319    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:55.776319    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.779826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:55.779933    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:55.780038    3816 type.go:168] "Request Body" body=""
	I1205 06:35:55.780038    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.782548    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:56.782984    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:56.782984    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:56.786482    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:57.787420    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:57.787420    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:57.791145    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:58.791893    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:58.792215    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:58.795191    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:59.795792    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:59.795792    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:59.798496    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:00.799902    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:00.800226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:00.803690    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:01.377212    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:01.460054    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:01.465324    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:01.465324    3816 retry.go:31] will retry after 27.628572595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:01.803905    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:01.803905    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:01.806773    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:02.807252    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:02.807252    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:02.809866    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:03.810536    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:03.810536    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:03.813578    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:04.814042    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:04.814042    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:04.817276    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:05.818288    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:05.818679    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:05.821810    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:36:05.821891    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:05.821987    3816 type.go:168] "Request Body" body=""
	I1205 06:36:05.821987    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:05.824311    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:06.824568    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:06.824568    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:06.828662    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:07.829627    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:07.829627    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:07.832420    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:08.833221    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:08.833221    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:08.837155    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:09.838074    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:09.838074    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:09.841184    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:10.842375    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:10.842375    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:10.844946    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:11.846051    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:11.846051    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:11.849339    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:12.849998    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:12.850423    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:12.852739    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:13.853070    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:13.853070    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:13.856576    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:14.857697    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:14.857697    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:14.863183    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:36:15.864368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:15.864368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:15.868275    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:36:15.868370    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:15.868414    3816 type.go:168] "Request Body" body=""
	I1205 06:36:15.868524    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:15.870901    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:16.847285    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:16.871649    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:16.871961    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:16.873985    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:16.928128    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:16.933236    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:16.933236    3816 retry.go:31] will retry after 34.477637514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
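The storageclass apply fails for the same underlying reason: kubectl's client-side validation first fetches the OpenAPI document from the apiserver (localhost:8441 here), and while the apiserver is down that fetch is refused, so validation itself errors out before anything is applied; minikube then schedules another attempt (34.5s later, per its backoff). A rough sketch of that apply-and-retry pattern, with made-up backoff values rather than minikube's actual schedule:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl the way the ssh_runner lines above
	// do, and retries on failure. Backoff here is illustrative.
	func applyWithRetry(manifest string, attempts int) error {
		backoff := 2 * time.Second
		for i := 1; i <= attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			// While the apiserver is down, validation's OpenAPI download
			// fails with "connection refused", exactly as logged above.
			fmt.Printf("apply attempt %d failed: %v\n%s", i, err, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return fmt.Errorf("%s not applied after %d attempts", manifest, attempts)
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println(err)
		}
	}

The --validate=false escape hatch the error message suggests would skip the OpenAPI download entirely, at the cost of client-side validation.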
	I1205 06:36:17.875167    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:17.875167    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:17.879555    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:18.879691    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:18.879691    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:18.882703    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:19.883482    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:19.883482    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:19.886835    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:20.887694    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:20.887694    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:20.890798    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:21.891367    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:21.891367    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:21.894170    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:22.894555    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:22.894555    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:22.898343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:23.898560    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:23.898560    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:23.901633    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:24.902026    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:24.902026    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:24.905116    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:25.905658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:25.905658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:25.908458    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:36:25.908570    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
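What node_ready.go is polling for is the node's "Ready" condition. Stripped of client-go's machinery, the check amounts to fetching the Node object and scanning status.conditions, as in this stdlib-only sketch; a real request against this endpoint would also need the kubeconfig's credentials and CA, so this is purely illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	// node models just the fields the readiness check needs.
	type node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func nodeReady(url string) (bool, error) {
		resp, err := http.Get(url)
		if err != nil {
			return false, err // the EOFs in the warnings above surface here
		}
		defer resp.Body.Close()
		var n node
		if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, fmt.Errorf("node has no Ready condition")
	}

	func main() {
		ready, err := nodeReady("https://127.0.0.1:55398/api/v1/nodes/functional-247800")
		fmt.Println(ready, err)
	}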
	I1205 06:36:25.908723    3816 type.go:168] "Request Body" body=""
	I1205 06:36:25.908723    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:25.911359    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:26.911630    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:26.911630    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:26.915364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:27.916524    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:27.916824    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:27.919661    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:28.920716    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:28.920716    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:28.923642    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:29.100195    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:29.179813    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.183920    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.184562    3816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
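Each addon enable ends this way because every kubectl apply races an apiserver that never comes up; gating the applies on the apiserver's /readyz endpoint would separate "apiserver down" from genuine manifest errors. A sketch of such a readiness gate, assuming the apiserver address from the error messages above; waitReady is a hypothetical helper, not minikube's API:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitReady polls /readyz until the apiserver answers 200 or the
	// timeout expires. TLS verification is skipped only because this
	// illustrative probe has no CA bundle wired in.
	func waitReady(url string, timeout time.Duration) error {
		c := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := c.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver at %s not ready within %s", url, timeout)
	}

	func main() {
		if err := waitReady("https://localhost:8441/readyz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver ready; safe to apply addon manifests")
	}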
	I1205 06:36:29.924461    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:29.924461    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:29.927800    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:30.928583    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:30.928583    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:30.931166    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:31.931918    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:31.931918    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:31.935633    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:32.936157    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:32.936157    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:32.939359    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:33.939769    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:33.939769    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:33.943624    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:34.944004    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:34.944410    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:34.946809    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:35.948067    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:35.948397    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:35.951285    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:36:35.951285    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:35.951859    3816 type.go:168] "Request Body" body=""
	I1205 06:36:35.951913    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:35.956062    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:36.956335    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:36.956335    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:36.959382    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:37.959668    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:37.959668    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:37.962651    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:38.963737    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:38.963737    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:38.967065    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:39.967557    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:39.967557    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:39.970531    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:40.970718    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:40.970718    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:40.974099    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:41.974734    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:41.975168    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:41.977669    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:42.977960    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:42.977960    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:42.981583    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:43.982240    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:43.982240    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:43.985849    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:44.986627    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:44.986627    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:44.989945    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:45.990505    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:45.990505    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:45.993980    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:36:45.994070    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:45.994133    3816 type.go:168] "Request Body" body=""
	I1205 06:36:45.994133    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:45.996849    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:46.997191    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:46.997191    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:47.002502    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:36:48.002840    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:48.003305    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:48.006657    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:49.007253    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:49.007253    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:49.011209    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:50.011465    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:50.011889    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:50.014740    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:51.015805    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:51.015805    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:51.019618    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:51.417352    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:51.854034    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:36:51.865604    3816 out.go:179] * Enabled addons: 
	I1205 06:36:51.868880    3816 addons.go:530] duration metric: took 1m56.7213702s for enable addons: enabled=[]
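The empty enabled=[] list is the net result: after 1m56.7s of callbacks and retries, neither storage-provisioner nor default-storageclass was applied, because every attempt hit the connection-refused validation failure above. From here the log returns to the node Ready poll, which keeps cycling through the same pattern: ten one-second Retry-After attempts, then another EOF warning.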
	I1205 06:36:52.020718    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:52.020718    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:52.023235    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:53.023539    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:53.023927    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:53.026996    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:54.027998    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:54.027998    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:54.032187    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:36:55.032402    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:55.032402    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:55.036736    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:56.037433    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:56.037433    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:56.040359    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:36:56.040359    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:36:56.040359    3816 type.go:168] "Request Body" body=""
	I1205 06:36:56.040359    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:56.043162    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:57.043498    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:57.043941    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:57.046650    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:58.047193    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:58.047742    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:58.050545    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:59.051297    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:59.051297    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:59.054095    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:00.054646    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:00.054646    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:00.057943    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:01.058170    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:01.058170    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:01.061024    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:02.061200    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:02.061200    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:02.064035    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:03.065365    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:03.065365    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:03.068662    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:04.069784    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:04.070189    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:04.072456    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:05.073381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:05.073381    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:05.076559    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:06.076793    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:06.076793    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:06.079598    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:06.079598    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:06.079598    3816 type.go:168] "Request Body" body=""
	I1205 06:37:06.079598    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:06.082197    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:07.082493    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:07.082493    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:07.085205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:08.086412    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:08.086412    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:08.089713    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:09.090483    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:09.090483    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:09.093906    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:10.094287    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:10.094287    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:10.097613    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:11.097803    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:11.097803    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:11.101190    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:12.101619    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:12.101619    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:12.104634    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:13.104688    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:13.104688    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:13.108075    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:14.108856    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:14.109198    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:14.113007    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:15.113918    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:15.113918    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:15.116912    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:16.117830    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:16.117830    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:16.121438    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:37:16.121438    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:16.121438    3816 type.go:168] "Request Body" body=""
	I1205 06:37:16.121438    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:16.124099    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:17.124588    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:17.124588    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:17.128092    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:18.128319    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:18.128319    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:18.132513    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:19.132736    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:19.132736    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:19.135560    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:20.136515    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:20.136515    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:20.139792    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:21.140167    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:21.140471    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:21.143328    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:22.144039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:22.144039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:22.146593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:23.147175    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:23.147543    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:23.150087    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:24.150247    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:24.150247    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:24.154118    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:25.154433    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:25.154433    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:25.157386    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:26.157568    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:26.157568    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:26.160472    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:26.160472    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:26.160472    3816 type.go:168] "Request Body" body=""
	I1205 06:37:26.161000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:26.162649    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:27.163417    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:27.163417    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:27.167106    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:28.167812    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:28.167812    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:28.170974    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:29.171418    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:29.171418    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:29.174717    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:30.174973    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:30.174973    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:30.179281    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:31.179472    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:31.179472    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:31.182137    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:32.182463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:32.182463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:32.185914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:33.186359    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:33.186359    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:33.189745    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:34.190102    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:34.190102    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:34.193507    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:35.194094    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:35.194094    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:35.197205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:36.197770    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:36.197770    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.200498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:36.200498    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:36.201020    3816 type.go:168] "Request Body" body=""
	I1205 06:37:36.201099    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.203111    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:37.204025    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:37.204025    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:37.207133    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:38.207447    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:38.207447    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:38.210787    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:39.211776    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:39.211776    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:39.213772    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:40.214710    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:40.214710    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:40.217616    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:41.217767    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:41.217767    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:41.221200    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:42.221683    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:42.222132    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:42.224721    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:43.224982    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:43.224982    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:43.229361    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:44.230310    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:44.230310    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:44.233109    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:45.234073    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:45.234345    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:45.238600    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:46.238845    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:46.238845    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:46.242060    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:37:46.242126    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[... retry loop repeats unchanged: each cycle re-issues GET https://127.0.0.1:55398/api/v1/nodes/functional-247800 once per second for 10 attempts, every response returns empty after 1-5 ms, and each cycle ends with the EOF warning ...]
	W1205 06:37:56.281414    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:06.319941    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:16.362394    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:26.407565    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:36.448309    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:46.492857    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:38:56.532932    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:06.573172    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:16.615128    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[the one-second poll/Retry-After cycle above repeats unchanged against https://127.0.0.1:55398/api/v1/nodes/functional-247800; after every tenth attempt the client logs the same "will retry" warning and immediately opens the next cycle:]
	W1205 06:39:26.658286    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:36.702717    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:46.745230    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:39:56.785844    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:40:06.825616    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:40:16.864135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:40:26.903925    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:40:36.943558    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	W1205 06:40:46.986125    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:49.995580    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:49.995580    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:49.998526    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:50.998794    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:50.998794    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:51.001637    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:52.002658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:52.002658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:52.004968    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:53.005044    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:53.005445    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:53.008445    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:54.009089    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:54.009089    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:54.012447    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:55.012756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:55.012756    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:55.015364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:55.523386    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 06:40:55.523386    3816 node_ready.go:38] duration metric: took 6m0.0010607s for node "functional-247800" to be "Ready" ...
	I1205 06:40:55.527309    3816 out.go:203] 
	W1205 06:40:55.529851    3816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:40:55.529851    3816 out.go:285] * 
	W1205 06:40:55.531579    3816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:40:55.533404    3816 out.go:203] 
	
	
	==> Docker <==
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.520999227Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521005327Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521028530Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521065534Z" level=info msg="Initializing buildkit"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.631468044Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636567622Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725240Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636825651Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725440Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:34:51 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:51 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:34:52 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:34:52 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:41:52.844589   18865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:41:52.845527   18865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:41:52.846758   18865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:41:52.848534   18865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:41:52.850732   18865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001158] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001030] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001035] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000969] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000975] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:34] CPU: 4 PID: 56451 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000864] RIP: 0033:0x7f46c5e3eb20
	[  +0.000406] Code: Unable to access opcode bytes at RIP 0x7f46c5e3eaf6.
	[  +0.000950] RSP: 002b:00007fff1eb3d7e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001108] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001199] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000983] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000845] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000799] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000884] FS:  0000000000000000 GS:  0000000000000000
	[  +0.829311] CPU: 0 PID: 56573 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000781] RIP: 0033:0x7f241df52b20
	[  +0.000533] Code: Unable to access opcode bytes at RIP 0x7f241df52af6.
	[  +0.000663] RSP: 002b:00007ffded7fa4e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000781] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:41:52 up  2:15,  0 user,  load average: 0.40, 0.36, 0.60
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:41:49 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:49 functional-247800 kubelet[18708]: E1205 06:41:49.953457   18708 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:41:49 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:41:49 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:41:50 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 892.
	Dec 05 06:41:50 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:50 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:50 functional-247800 kubelet[18719]: E1205 06:41:50.770898   18719 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:41:50 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:41:50 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:41:51 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 893.
	Dec 05 06:41:51 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:51 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:51 functional-247800 kubelet[18747]: E1205 06:41:51.495044   18747 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:41:51 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:41:51 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:41:52 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 894.
	Dec 05 06:41:52 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:52 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:52 functional-247800 kubelet[18775]: E1205 06:41:52.242863   18775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:41:52 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:41:52 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:41:52 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 895.
	Dec 05 06:41:52 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:41:52 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
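Triage note: the kubelet section above shows kubelet exiting during config validation on every restart (restart counter 892-895) because the WSL2 host is still on cgroup v1, which this kubelet build rejects. With kubelet down, the apiserver on port 8441 never comes up, which explains both the "connection refused" errors in the describe-nodes output and the EOF responses in the node-ready retry loop. A minimal way to confirm which cgroup mode the kicbase container sees, assuming the functional-247800 container is still running (this is a triage sketch, not part of the harness):

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1, matching the kubelet validation error above
	docker exec functional-247800 stat -fc %T /sys/fs/cgroup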
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (605.8698ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (54.09s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (54.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 kubectl -- --context functional-247800 get pods
E1205 06:42:23.904240    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:731: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 kubectl -- --context functional-247800 get pods: exit status 1 (50.597526s)

                                                
                                                
** stderr ** 
	E1205 06:42:23.834152    7748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:42:33.920288    7748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:42:43.963633    7748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:42:54.002673    7748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:43:04.045639    7748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
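Triage note: every kubectl attempt above targets https://127.0.0.1:55398, the host port Docker publishes for the container's apiserver port 8441 (visible in the inspect output below). The docker-proxy accepts the TCP connection but nothing is listening behind it while kubelet crash-loops, hence EOF rather than connection refused. A quick probe of the same path, as a sketch assuming the cluster container is still up:

	# confirm the published host port for the apiserver, then hit its health endpoint
	docker port functional-247800 8441
	# expect an immediate failure while kubelet is crash-looping
	curl -sk https://127.0.0.1:55398/healthz || echo "apiserver unreachable"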
functional_test.go:734: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-247800 kubectl -- --context functional-247800 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
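The inspect output confirms the 8441/tcp -> 127.0.0.1:55398 mapping used by the failing requests above. For scripted triage, the same mapping can be read straight from the inspect JSON; a sketch assuming jq is available on the host:

	# extract the host port bound to the apiserver container port
	docker inspect functional-247800 | jq -r '.[0].NetworkSettings.Ports["8441/tcp"][0].HostPort'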
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (694.9615ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.6261861s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-088800 image ls --format short --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format yaml --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ ssh     │ functional-088800 ssh pgrep buildkitd                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ image   │ functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr                  │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls                                                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format json --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format table --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ delete  │ -p functional-088800                                                                                                    │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:26 UTC │
	│ start   │ -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:26 UTC │                     │
	│ start   │ -p functional-247800 --alsologtostderr -v=8                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:34 UTC │                     │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:41 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:latest                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add minikube-local-cache-test:functional-247800                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache delete minikube-local-cache-test:functional-247800                                              │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl images                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ cache   │ functional-247800 cache reload                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ kubectl │ functional-247800 kubectl -- --context functional-247800 get pods                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:34:44
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:34:43.990318    3816 out.go:360] Setting OutFile to fd 932 ...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.034404    3816 out.go:374] Setting ErrFile to fd 1564...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.048005    3816 out.go:368] Setting JSON to false
	I1205 06:34:44.051134    3816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7741,"bootTime":1764908742,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:34:44.051134    3816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:34:44.054997    3816 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:34:44.057041    3816 notify.go:221] Checking for updates...
	I1205 06:34:44.057041    3816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:44.060615    3816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:34:44.063386    3816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:34:44.065338    3816 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:34:44.068100    3816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:34:44.070765    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:44.071546    3816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:34:44.185014    3816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:34:44.190117    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.434951    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.415349563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.438948    3816 out.go:179] * Using the docker driver based on existing profile
	I1205 06:34:44.442716    3816 start.go:309] selected driver: docker
	I1205 06:34:44.442716    3816 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.442716    3816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:34:44.449451    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.693650    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.673163701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.776708    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:44.776708    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:44.776708    3816 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.779353    3816 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:34:44.789396    3816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:34:44.793121    3816 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:34:44.794774    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:44.794774    3816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:34:44.844630    3816 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:44.871213    3816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:34:44.871213    3816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:34:45.153466    3816 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
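The two 404s above are the expected miss path: no preloaded image tarball is published for the v1.35.0-beta.0 pre-release, so minikube falls back to caching each required image individually (the "windows sanitize" lines that follow). A minimal Go sketch of that kind of existence probe, with the URL taken from the log and the helper name hypothetical:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // preloadExists probes the tarball URL and treats anything other than
    // 200 as a miss, matching the "status code: 404" warnings above.
    // Hypothetical helper, not minikube's actual preload.go code.
    func preloadExists(url string) (bool, error) {
    	resp, err := http.Head(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
    	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4"
    	ok, err := preloadExists(url)
    	fmt.Println(ok, err) // expected here: false <nil>, so images are cached one by one
    }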
	I1205 06:34:45.154472    3816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:34:45.156762    3816 cache.go:243] Successfully downloaded all kic artifacts
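The "windows sanitize" lines exist because image references contain ":", which is not a legal character in NTFS file names, so the tag separator is rewritten to "_" before the cache path is formed (pause:3.10.1 becomes pause_3.10.1). A sketch of that mapping, assuming a hypothetical helper rather than the actual localpath.go implementation:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // sanitizeImagePath maps "registry.k8s.io/pause:3.10.1" onto a
    // Windows-safe cache path such as registry.k8s.io\pause_3.10.1.
    // On non-Windows hosts filepath.Join will use "/" instead.
    func sanitizeImagePath(cacheDir, image string) string {
    	safe := strings.ReplaceAll(image, ":", "_")
    	return filepath.Join(cacheDir, filepath.FromSlash(safe))
    }

    func main() {
    	// Cache root shortened for illustration only.
    	fmt.Println(sanitizeImagePath(`C:\cache\images\amd64`, "registry.k8s.io/pause:3.10.1"))
    }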
	I1205 06:34:45.156819    3816 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:45.157157    3816 start.go:364] duration metric: took 122.3µs to acquireMachinesLock for "functional-247800"
	I1205 06:34:45.157157    3816 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:34:45.157157    3816 fix.go:54] fixHost starting: 
	I1205 06:34:45.165313    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:45.243648    3816 fix.go:112] recreateIfNeeded on functional-247800: state=Running err=<nil>
	W1205 06:34:45.243648    3816 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:34:45.267762    3816 out.go:252] * Updating the running docker "functional-247800" container ...
	I1205 06:34:45.269766    3816 machine.go:94] provisionDockerMachine start ...
	I1205 06:34:45.274766    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:45.449049    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:45.449049    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:45.449049    3816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:34:45.686505    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:45.686505    3816 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:34:45.691507    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:46.703091    3816 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800: (1.0115691s)
	I1205 06:34:46.706016    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:46.706016    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:46.706016    3816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:34:47.035712    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:47.042684    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.107199    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:47.107199    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:47.107199    3816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:34:47.308149    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:34:47.308197    3816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:34:47.308318    3816 ubuntu.go:190] setting up certificates
	I1205 06:34:47.308318    3816 provision.go:84] configureAuth start
	I1205 06:34:47.315253    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:47.380504    3816 provision.go:143] copyHostCerts
	I1205 06:34:47.381517    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:34:47.381517    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:34:47.382508    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:34:47.382508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:34:47.383507    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:34:47.384508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:34:47.385507    3816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
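The "generating server cert" line regenerates the Docker TLS server certificate so that its SANs cover the loopback address, the container IP, the hostname, localhost and minikube. A self-contained crypto/x509 sketch of issuing such a SAN-bearing certificate; the CA below is throwaway, whereas minikube reuses the persisted ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA; minikube loads its existing CA key pair instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate whose SANs match the san=[...] list in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-247800"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"functional-247800", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Println(len(der), err) // DER-encoded cert, ready to PEM-encode as server.pem
    }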
	I1205 06:34:47.573727    3816 provision.go:177] copyRemoteCerts
	I1205 06:34:47.580429    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:34:47.585428    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.664000    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:47.815162    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1205 06:34:47.815801    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:34:47.849954    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1205 06:34:47.850956    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:34:47.876175    3816 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.876248    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:34:47.876248    3816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.7217371s
	I1205 06:34:47.876248    3816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:34:47.883801    3816 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.883881    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:34:47.883881    3816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.72937s
	I1205 06:34:47.883881    3816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:34:47.908586    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1205 06:34:47.909421    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:34:47.925048    3816 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.925345    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:34:47.925345    3816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.7708333s
	I1205 06:34:47.925345    3816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:34:47.926059    3816 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.926059    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:34:47.926059    3816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.7715471s
	I1205 06:34:47.926059    3816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:34:47.936781    3816 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.937442    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:34:47.937555    3816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.7830428s
	I1205 06:34:47.937609    3816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:34:47.946154    3816 provision.go:87] duration metric: took 637.8269ms to configureAuth
	I1205 06:34:47.946231    3816 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:34:47.946358    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:47.951931    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.990646    3816 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.990646    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:34:47.991641    3816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8371282s
	I1205 06:34:47.991641    3816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:34:48.007838    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.008431    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.008476    3816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:34:48.018898    3816 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.018898    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:34:48.018898    3816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.8643851s
	I1205 06:34:48.018898    3816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:34:48.061664    3816 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.062004    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:34:48.062141    3816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9076274s
	I1205 06:34:48.062141    3816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:34:48.062198    3816 cache.go:87] Successfully saved all images to host disk.
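The cache.go lines interleaved through this phase show the per-image pattern: take a named lock, stat the tarball, and record a hit ("exists ... skipping") rather than re-pulling. Reduced to a hedged sketch, with plain goroutines standing in for the real per-name lock bookkeeping:

    package main

    import (
    	"fmt"
    	"os"
    	"sync"
    )

    // saveAll caches each image tarball at most once; when the file is
    // already on disk the worker reports a hit and returns, as in the
    // "exists ... skipping" / "save to tar file ... succeeded" lines above.
    func saveAll(paths map[string]string) {
    	var wg sync.WaitGroup
    	for image, tar := range paths {
    		wg.Add(1)
    		go func(image, tar string) {
    			defer wg.Done()
    			if _, err := os.Stat(tar); err == nil {
    				fmt.Printf("cache image %q -> %q exists, skipping\n", image, tar)
    				return
    			}
    			// ... otherwise pull the image and write the tarball ...
    		}(image, tar)
    	}
    	wg.Wait()
    }

    func main() {
    	saveAll(map[string]string{"registry.k8s.io/pause:3.10.1": "pause_3.10.1"})
    }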
	I1205 06:34:48.196159    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:34:48.196159    3816 ubuntu.go:71] root file system type: overlay
	I1205 06:34:48.196159    3816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:34:48.200167    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.256431    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.257239    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.257347    3816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:34:48.462598    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:34:48.466014    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.522845    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.523383    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.523415    3816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:34:48.714113    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
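The diff -u ... || { mv ...; systemctl ...; } one-liner above is the usual install-only-if-changed idiom: the staged docker.service.new replaces the live unit, and Docker is reloaded and restarted, only when the two files differ (here they matched, so the command produced no output and no restart). The same idea as a local Go sketch, paths illustrative:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // installIfChanged replaces dst with the staged src only when their
    // contents differ; it returns true when the caller should reload and
    // restart the service. A missing dst counts as "changed".
    func installIfChanged(src, dst string) (bool, error) {
    	want, err := os.ReadFile(src)
    	if err != nil {
    		return false, err
    	}
    	have, err := os.ReadFile(dst)
    	if err == nil && bytes.Equal(want, have) {
    		return false, os.Remove(src) // identical: discard the staged copy
    	}
    	return true, os.Rename(src, dst)
    }

    func main() {
    	changed, err := installIfChanged("docker.service.new", "docker.service")
    	fmt.Println(changed, err)
    }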
	I1205 06:34:48.714641    3816 machine.go:97] duration metric: took 3.444826s to provisionDockerMachine
	I1205 06:34:48.714700    3816 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:34:48.714747    3816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:34:48.721762    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:34:48.726053    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.800573    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:48.947188    3816 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:34:48.954494    3816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_ID="12"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:34:48.954494    3816 command_runner.go:130] > ID=debian
	I1205 06:34:48.954494    3816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:34:48.954494    3816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:34:48.955010    3816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:34:48.955099    3816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:34:48.955099    3816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:34:48.955806    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:34:48.955806    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /etc/ssl/certs/80362.pem
	I1205 06:34:48.956436    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:34:48.956436    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> /etc/test/nested/copy/8036/hosts
	I1205 06:34:48.960827    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:34:48.973199    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:34:49.002014    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:34:49.027943    3816 start.go:296] duration metric: took 313.2383ms for postStartSetup
	I1205 06:34:49.031806    3816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:34:49.035611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.090476    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.213008    3816 command_runner.go:130] > 1%
	I1205 06:34:49.217907    3816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:34:49.227048    3816 command_runner.go:130] > 950G
	I1205 06:34:49.227093    3816 fix.go:56] duration metric: took 4.0698775s for fixHost
	I1205 06:34:49.227184    3816 start.go:83] releasing machines lock for "functional-247800", held for 4.069942s
	I1205 06:34:49.230591    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:49.286648    3816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:34:49.290773    3816 ssh_runner.go:195] Run: cat /version.json
	I1205 06:34:49.290773    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.294768    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.346982    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.347419    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.463868    3816 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:34:49.468593    3816 ssh_runner.go:195] Run: systemctl --version
	I1205 06:34:49.473361    3816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1205 06:34:49.473361    3816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 06:34:49.482411    3816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:34:49.482411    3816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:34:49.486655    3816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:34:49.495075    3816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:34:49.495101    3816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:34:49.499557    3816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:34:49.512091    3816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:34:49.512091    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:49.512091    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
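detect.go reports "cgroupfs" for the host here. How minikube actually makes that call is not shown in this log; one common heuristic is to treat a unified cgroup v2 hierarchy as systemd-managed and anything else as cgroupfs, sketched below purely for illustration:

    package main

    import (
    	"fmt"
    	"os"
    )

    // guessCgroupDriver returns "systemd" when the cgroup v2 unified
    // hierarchy marker file exists and "cgroupfs" otherwise. This is a
    // common heuristic, not necessarily how detect.go decides.
    func guessCgroupDriver() string {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() { fmt.Println(guessCgroupDriver()) }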
	I1205 06:34:49.512091    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:49.534248    3816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:34:49.538479    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:34:49.557417    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:34:49.572725    3816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:34:49.577000    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1205 06:34:49.583562    3816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:34:49.583562    3816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 06:34:49.600012    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.618632    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:34:49.636357    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.654641    3816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:34:49.675114    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:34:49.696597    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:34:49.715167    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:34:49.738213    3816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:34:49.750303    3816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:34:49.754900    3816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:34:49.771255    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:49.909849    3816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:34:50.068262    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:50.068262    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:50.073308    3816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:34:50.092739    3816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1205 06:34:50.092785    3816 command_runner.go:130] > [Unit]
	I1205 06:34:50.092785    3816 command_runner.go:130] > Description=Docker Application Container Engine
	I1205 06:34:50.092785    3816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1205 06:34:50.092828    3816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1205 06:34:50.092828    3816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1205 06:34:50.092828    3816 command_runner.go:130] > Requires=docker.socket
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitBurst=3
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitIntervalSec=60
	I1205 06:34:50.092884    3816 command_runner.go:130] > [Service]
	I1205 06:34:50.092884    3816 command_runner.go:130] > Type=notify
	I1205 06:34:50.092884    3816 command_runner.go:130] > Restart=always
	I1205 06:34:50.092919    3816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1205 06:34:50.092943    3816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1205 06:34:50.092943    3816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1205 06:34:50.092943    3816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1205 06:34:50.092943    3816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1205 06:34:50.092943    3816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1205 06:34:50.092943    3816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1205 06:34:50.092943    3816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNOFILE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNPROC=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitCORE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1205 06:34:50.092943    3816 command_runner.go:130] > TasksMax=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > TimeoutStartSec=0
	I1205 06:34:50.092943    3816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1205 06:34:50.092943    3816 command_runner.go:130] > Delegate=yes
	I1205 06:34:50.092943    3816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1205 06:34:50.092943    3816 command_runner.go:130] > KillMode=process
	I1205 06:34:50.092943    3816 command_runner.go:130] > OOMScoreAdjust=-500
	I1205 06:34:50.092943    3816 command_runner.go:130] > [Install]
	I1205 06:34:50.092943    3816 command_runner.go:130] > WantedBy=multi-user.target
	I1205 06:34:50.097721    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.125496    3816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:34:50.186929    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.209805    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:34:50.227504    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:50.252330    3816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1205 06:34:50.256641    3816 ssh_runner.go:195] Run: which cri-dockerd
	I1205 06:34:50.264328    3816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1205 06:34:50.269234    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:34:50.282005    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:34:50.306573    3816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:34:50.447619    3816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:34:50.580607    3816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:34:50.581126    3816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 06:34:50.605071    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:34:50.630349    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:50.782135    3816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:34:51.643866    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:34:51.667031    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:34:51.689935    3816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 06:34:51.715903    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:51.740104    3816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:34:51.897148    3816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:34:52.038509    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.188129    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:34:52.216759    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:34:52.241711    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.388958    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:34:52.491038    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:52.508998    3816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:34:52.514460    3816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:34:52.523944    3816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1205 06:34:52.524474    3816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:34:52.524548    3816 command_runner.go:130] > Device: 0,112	Inode: 1756        Links: 1
	I1205 06:34:52.524589    3816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1205 06:34:52.524606    3816 command_runner.go:130] > Access: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524642    3816 command_runner.go:130] > Modify: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] > Change: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] >  Birth: -
	I1205 06:34:52.524737    3816 start.go:564] Will wait 60s for crictl version
	I1205 06:34:52.529361    3816 ssh_runner.go:195] Run: which crictl
	I1205 06:34:52.536028    3816 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:34:52.539850    3816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:34:52.581379    3816 command_runner.go:130] > Version:  0.1.0
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeName:  docker
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeVersion:  29.0.4
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:34:52.581379    3816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 06:34:52.585592    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.624737    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.628712    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.665154    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.668797    3816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:34:52.672375    3816 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:34:52.798876    3816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:34:52.801876    3816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:34:52.809731    3816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1205 06:34:52.813378    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:52.870537    3816 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:34:52.870721    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:52.873969    3816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:52.909019    3816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 06:34:52.909019    3816 cache_images.go:86] Images are preloaded, skipping loading
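"Images are preloaded, skipping loading" follows from comparing the docker images listing above against the required set for v1.35.0-beta.0; only if every required reference is present can the load step be skipped. A sketch of that set check with hypothetical helper names:

    package main

    import "fmt"

    // preloaded reports whether every required image reference already
    // appears in the daemon's image listing.
    func preloaded(have, want []string) bool {
    	got := make(map[string]bool, len(have))
    	for _, img := range have {
    		got[img] = true
    	}
    	for _, img := range want {
    		if !got[img] {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	fmt.Println(preloaded(
    		[]string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.5-0"},
    		[]string{"registry.k8s.io/pause:3.10.1"},
    	)) // true: all wanted images present
    }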
	I1205 06:34:52.909019    3816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:34:52.909019    3816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:34:52.913141    3816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:34:52.986014    3816 command_runner.go:130] > cgroupfs
	I1205 06:34:52.986014    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:52.986014    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:52.986014    3816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:34:52.986014    3816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:34:52.986014    3816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
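	The YAML above is the kubeadm/kubelet/kube-proxy configuration minikube generated from the options struct logged at kubeadm.go:190. minikube renders such manifests from Go templates over those options; a minimal, self-contained sketch of that pattern (the template text and field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Illustrative template: a fragment of a kubeadm ClusterConfiguration,
// not minikube's actual template text.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`))

func main() {
	// Values copied from the options struct in the log above.
	cfg := struct {
		KubernetesVersion    string
		ControlPlaneEndpoint string
		PodSubnet            string
		ServiceSubnet        string
	}{
		KubernetesVersion:    "v1.35.0-beta.0",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8441",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	if err := kubeadmTmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}

	The rendered manifest is then copied to /var/tmp/minikube/kubeadm.yaml.new on the node, as the scp line below shows.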
	
	I1205 06:34:52.990595    3816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubeadm
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubectl
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubelet
	I1205 06:34:53.003509    3816 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:34:53.008042    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:34:53.020762    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:34:53.041328    3816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:34:53.061676    3816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1205 06:34:53.085180    3816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:34:53.093591    3816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:34:53.098459    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:53.247095    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:53.952452    3816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:34:53.952558    3816 certs.go:195] generating shared ca certs ...
	I1205 06:34:53.952558    3816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:53.953085    3816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:34:53.953228    3816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:34:53.953228    3816 certs.go:257] generating profile certs ...
	I1205 06:34:53.954037    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:34:53.954334    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:34:53.954527    3816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:34:53.954527    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:34:53.954631    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:34:53.954814    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:34:53.954910    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:34:53.954973    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:34:53.955045    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:34:53.955116    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:34:53.955223    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:34:53.955290    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:34:53.955826    3816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:34:53.955954    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:34:53.956129    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:34:53.956912    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:34:53.957083    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem -> /usr/share/ca-certificates/8036.pem
	I1205 06:34:53.957119    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /usr/share/ca-certificates/80362.pem
	I1205 06:34:53.957269    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:53.958214    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:34:53.988313    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:34:54.013387    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:34:54.046063    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:34:54.077041    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:34:54.105745    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:34:54.131011    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:34:54.161212    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:34:54.186054    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:34:54.215522    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:34:54.241991    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:34:54.271902    3816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:34:54.296449    3816 ssh_runner.go:195] Run: openssl version
	I1205 06:34:54.306573    3816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:34:54.311042    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.336884    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:34:54.353148    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.366452    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.412489    3816 command_runner.go:130] > 3ec20f2e
	I1205 06:34:54.416608    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:34:54.434824    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.453553    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:34:54.472739    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481910    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481979    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.485785    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.529492    3816 command_runner.go:130] > b5213941
	I1205 06:34:54.534432    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:34:54.550655    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.568891    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:34:54.588631    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.607947    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.650843    3816 command_runner.go:130] > 51391683
	I1205 06:34:54.656334    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
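	The three ls/openssl/ln/test sequences above install each CA under OpenSSL's hashed-symlink convention: `openssl x509 -hash` prints the subject-name hash, and a /etc/ssl/certs/<hash>.0 symlink makes the certificate discoverable. A sketch of the same steps driven from Go (installCA is a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA is a hypothetical helper mirroring the openssl/ln sequence in
// the log; it is not part of minikube.
func installCA(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL resolves CAs through /etc/ssl/certs/<hash>.0 symlinks.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	return exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}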
	I1205 06:34:54.673967    3816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.682495    3816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.683019    3816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:34:54.683019    3816 command_runner.go:130] > Device: 8,48	Inode: 15231       Links: 1
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: 2025-12-05 06:30:39.655512939 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Modify: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Change: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] >  Birth: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.687561    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:34:54.732319    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.737009    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:34:54.781446    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.785553    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:34:54.831869    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.837267    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:34:54.879433    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.883677    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:34:54.927800    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.932770    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:34:54.976702    3816 command_runner.go:130] > Certificate will not expire
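	Each `openssl x509 -checkend 86400` run above exits zero only if the certificate is still valid 24 hours from now, which is why every check reports "Certificate will not expire". The equivalent check in pure Go, as a sketch (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at pemPath expires inside
// the given window, the pure-Go analogue of `openssl x509 -checkend`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	// 86400 seconds = 24h, the same window the log's checks use.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}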
	I1205 06:34:54.977317    3816 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:54.981646    3816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:34:55.016824    3816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:34:55.029851    3816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:34:55.029954    3816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:34:55.029954    3816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:34:55.034067    3816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:34:55.049954    3816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:34:55.054431    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.105351    3816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-247800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.105351    3816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-247800" cluster setting kubeconfig missing "functional-247800" context setting]
	I1205 06:34:55.106335    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.121466    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.122042    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
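	The rest.Config dump above is what minikube builds from the repaired kubeconfig. A minimal client-go sketch of the standard path from a kubeconfig file to a usable client (the kubeconfig path is taken from the log; error handling kept minimal):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; any valid kubeconfig works here.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", cfg.Host, "clientset ready:", clientset != nil)
}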
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:34:55.123267    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:34:55.127724    3816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:34:55.143728    3816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 06:34:55.143728    3816 kubeadm.go:602] duration metric: took 113.7728ms to restartPrimaryControlPlane
	I1205 06:34:55.143728    3816 kubeadm.go:403] duration metric: took 166.4081ms to StartCluster
	I1205 06:34:55.143728    3816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.143728    3816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.145169    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.145829    3816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 06:34:55.145829    3816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:34:55.145829    3816 addons.go:70] Setting storage-provisioner=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:70] Setting default-storageclass=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:239] Setting addon storage-provisioner=true in "functional-247800"
	I1205 06:34:55.145829    3816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-247800"
	I1205 06:34:55.145829    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:55.145829    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.153665    3816 out.go:179] * Verifying Kubernetes components...
	I1205 06:34:55.154863    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.158249    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.163403    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:55.210939    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.211668    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.212897    3816 addons.go:239] Setting addon default-storageclass=true in "functional-247800"
	I1205 06:34:55.212990    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.213105    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.217433    3816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:55.222787    3816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.222787    3816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:34:55.224705    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.226041    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.278804    3816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.278804    3816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:34:55.278889    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.282998    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.334515    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.337518    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:55.430551    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.457611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.475848    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.517112    3816 node_ready.go:35] waiting up to 6m0s for node "functional-247800" to be "Ready" ...
	I1205 06:34:55.517112    3816 type.go:168] "Request Body" body=""
	I1205 06:34:55.517112    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:55.519131    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
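	The GET /api/v1/nodes/functional-247800 requests above are the node-readiness poll started at node_ready.go:35; minikube keeps re-fetching the node until its Ready condition is True or the 6m0s budget runs out. A sketch of that check with client-go (names and the unbounded loop are illustrative; real code would enforce the timeout):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll once a second; a real caller would bound this (minikube uses 6m0s).
	for {
		if ready, err := nodeReady(cs, "functional-247800"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(time.Second)
	}
}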
	I1205 06:34:55.528125    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.578790    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.578790    3816 retry.go:31] will retry after 337.958227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.602029    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.605442    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.605442    3816 retry.go:31] will retry after 279.867444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
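	Both addon applies fail identically here because the apiserver behind localhost:8441 is still coming back up, so kubectl cannot download the OpenAPI schema it validates against; retry.go re-runs each apply after a jittered, growing delay (337ms and 279ms above, stretching toward several seconds below). A generic sketch of that retry-with-backoff pattern (retry is a stand-in, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry is a generic stand-in for the behavior logged by retry.go: run fn,
// and on failure wait a jittered, growing delay before trying again.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}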
	I1205 06:34:55.890357    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.921657    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.969614    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.974371    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.974371    3816 retry.go:31] will retry after 509.000816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.006071    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.010642    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.010642    3816 retry.go:31] will retry after 471.064759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.487937    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:56.489162    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:56.520264    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:56.520264    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:56.523343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
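	The with_retry.go lines show client-go honoring the apiserver's Retry-After header: each node GET is answered with a 1s delay hint and retried. A plain net/http sketch of the same courtesy (getWithRetryAfter is hypothetical, not the client-go implementation):

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter is a hypothetical helper: issue GETs, and when the
// server answers with a Retry-After header, sleep that long and try again.
func getWithRetryAfter(url string, attempts int) (*http.Response, error) {
	for i := 1; ; i++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || i >= attempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(ra)
		if err != nil {
			secs = 1 // fall back to the 1s delay seen in the log
		}
		fmt.Printf("Got a Retry-After response, delay=%ds attempt=%d\n", secs, i)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getWithRetryAfter("https://127.0.0.1:55398/api/v1/nodes/functional-247800", 3)
	if err != nil {
		fmt.Println("request failed:", err) // expected without the cluster's TLS setup
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}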
	I1205 06:34:56.575976    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 407.043808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 638.604661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.992080    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.065952    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.069179    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.069179    3816 retry.go:31] will retry after 488.646188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.223461    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.294874    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.299418    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.299514    3816 retry.go:31] will retry after 602.819042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.524155    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:57.524155    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:57.527278    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:57.562706    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.639333    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.644388    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.644388    3816 retry.go:31] will retry after 1.399464773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.907870    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.981775    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.984813    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.984921    3816 retry.go:31] will retry after 1.652361939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:58.527501    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:58.527501    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:58.529897    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:34:59.050453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:59.133420    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.139944    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.139944    3816 retry.go:31] will retry after 1.645340531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.530709    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:59.530709    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:59.534391    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:59.642381    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:59.718427    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.721834    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.721834    3816 retry.go:31] will retry after 2.46016532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.534639    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:00.534639    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:00.541150    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:35:00.790675    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:00.867216    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:00.867216    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.867216    3816 retry.go:31] will retry after 3.092416499s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:01.541435    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:01.541435    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:01.544716    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:02.187405    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:02.268020    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:02.273203    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.273203    3816 retry.go:31] will retry after 2.104673669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.544980    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:02.544980    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:02.548584    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:03.548839    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:03.548839    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:03.553516    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:03.966453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:04.049450    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.054065    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.054065    3816 retry.go:31] will retry after 2.461370012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.382944    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:04.458068    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.461488    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.461488    3816 retry.go:31] will retry after 4.66223575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.554680    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:04.555045    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:04.559246    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:05.559799    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:05.560272    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.563266    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:05.563380    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:05.563407    3816 type.go:168] "Request Body" body=""
	I1205 06:35:05.563407    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.565659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
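
The "Got a Retry-After response" attempt=1..10 lines show the Kubernetes client honoring the server's Retry-After header: it waits the advertised 1s between attempts and gives up after ten, at which point node_ready.go logs the EOF and starts a fresh cycle. A minimal sketch of honoring Retry-After over plain net/http follows, under the assumption of an integer-seconds header value; client-go's real with_retry logic handles more cases than this.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter issues GETs, and when the server answers with a
// Retry-After header it waits the advertised delay and tries again,
// giving up after maxAttempts -- the shape of the attempt=1..10
// cycle in the log.
func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err // transport-level failure: surface immediately
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // server did not ask us to back off
		}
		resp.Body.Close()
		secs, convErr := strconv.Atoi(ra)
		if convErr != nil {
			secs = 1 // fall back to 1s, the delay="1s" seen in the log
		}
		lastErr = fmt.Errorf("attempt %d: server asked to retry after %ds", attempt, secs)
		time.Sleep(time.Duration(secs) * time.Second)
	}
	return nil, lastErr
}

func main() {
	// URL copied from the log; outside the test host this GET simply
	// fails at the transport level and the sketch reports it.
	resp, err := getWithRetryAfter("https://127.0.0.1:55398/api/v1/nodes/functional-247800", 10)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
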
	I1205 06:35:06.521322    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:06.565857    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:06.565857    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:06.569356    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:06.601193    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:06.606428    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:06.606428    3816 retry.go:31] will retry after 3.326595593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:07.570311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:07.570658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:07.572699    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:08.573282    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:08.573282    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:08.576531    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:09.129039    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:09.217404    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:09.217937    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.217937    3816 retry.go:31] will retry after 6.891085945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.577333    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:09.577333    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:09.580146    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:09.938122    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:10.010022    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:10.013513    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.013513    3816 retry.go:31] will retry after 11.942280673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.581103    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:10.581488    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:10.585509    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:11.586198    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:11.586569    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:11.589434    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:12.589851    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:12.589851    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:12.594400    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:13.595039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:13.595039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:13.598596    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:14.599060    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:14.599060    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:14.601840    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:15.602885    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:15.602885    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.605878    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:15.605878    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:15.605878    3816 type.go:168] "Request Body" body=""
	I1205 06:35:15.605878    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.608593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
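
Between the Retry-After cycles, node_ready.go is running a poll-until-Ready loop: fetch the node, log transient errors with "(will retry)", sleep, and try again until a deadline; the timestamps show a fresh cycle roughly every ten seconds. A compressed sketch of that loop is below, with getNodeReady as a hypothetical stand-in for the real GET against /api/v1/nodes/functional-247800 and deliberately short intervals so it terminates quickly.

package main

import (
	"errors"
	"fmt"
	"time"
)

// getNodeReady is a hypothetical stand-in for the real GET against
// /api/v1/nodes/<name>; here it always fails the way the log does.
func getNodeReady(name string) (bool, error) {
	return false, errors.New(`Get "https://127.0.0.1:55398/api/v1/nodes/` + name + `": EOF`)
}

// waitNodeReady polls until the node reports Ready or the deadline
// passes, logging and swallowing transient errors in between.
func waitNodeReady(name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := getNodeReady(name)
		if err != nil {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		} else if ready {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
}

func main() {
	_ = waitNodeReady("functional-247800", 500*time.Millisecond, 2*time.Second)
}
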
	I1205 06:35:16.114246    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:16.191406    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:16.193997    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.193997    3816 retry.go:31] will retry after 14.066483079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.609000    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:16.609000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:16.611991    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:17.612458    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:17.612996    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:17.617813    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:18.618806    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:18.618806    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:18.622265    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:19.623287    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:19.623287    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:19.627037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:20.627291    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:20.627658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:20.630318    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:21.630930    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:21.630930    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:21.635020    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:21.963392    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:22.044084    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:22.048902    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.048902    3816 retry.go:31] will retry after 11.169519715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.635453    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:22.635453    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:22.638251    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:23.639335    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:23.639335    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:23.642113    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:24.642790    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:24.642790    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:24.645713    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:25.646115    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:25.646115    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.649594    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:25.649594    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:25.649594    3816 type.go:168] "Request Body" body=""
	I1205 06:35:25.649594    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.652081    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:26.652283    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:26.652283    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:26.656196    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:27.656951    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:27.656951    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:27.660911    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:28.661511    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:28.661511    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:28.665811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:29.666123    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:29.666562    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:29.669285    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:30.265388    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:30.346699    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:30.350211    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.350747    3816 retry.go:31] will retry after 20.097178843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.669645    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:30.669645    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:30.673744    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:31.674027    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:31.674411    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:31.676873    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:32.677707    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:32.677707    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:32.680779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:33.224337    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:33.301595    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:33.304702    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.304702    3816 retry.go:31] will retry after 17.498614608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.681368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:33.681368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:33.685247    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:34.685570    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:34.685570    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:34.689019    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:35.689478    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:35.689478    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.693423    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:35.693478    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:35.693605    3816 type.go:168] "Request Body" body=""
	I1205 06:35:35.693728    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.697203    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:36.697741    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:36.697741    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:36.700841    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:37.701712    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:37.701712    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:37.705613    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:38.706497    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:38.706497    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:38.709240    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:39.710263    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:39.710263    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:39.714262    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:40.714574    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:40.714574    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:40.717659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:41.717815    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:41.717815    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:41.720914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:42.722129    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:42.722129    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:42.725427    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:43.726728    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:43.727083    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:43.729850    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:44.730383    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:44.730383    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:44.733852    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:45.735220    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:45.735642    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.738135    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:45.738135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:45.738135    3816 type.go:168] "Request Body" body=""
	I1205 06:35:45.738135    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.740498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:46.740699    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:46.740699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:46.744820    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:47.745629    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:47.746108    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:47.748477    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:48.749130    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:48.749130    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:48.752304    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:49.753459    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:49.753860    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:49.756462    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:50.453778    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:50.536078    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.536601    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.536601    3816 retry.go:31] will retry after 10.835620015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.756979    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:50.756979    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:50.760402    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:50.808292    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:50.896096    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.901180    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.901180    3816 retry.go:31] will retry after 25.940426602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
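
Every one of these apply failures has the same root cause: kubectl's client-side validation tries to download the OpenAPI schema from https://localhost:8441/openapi/v2 and gets "connection refused", i.e. nothing is listening on the apiserver port, so the retries cannot succeed until the apiserver comes back. The following self-contained probe of that endpoint uses the port and path taken from the log; the InsecureSkipVerify setting is an assumption for a local, self-signed apiserver certificate and is meant for this one-off diagnostic only.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The local apiserver serves a self-signed certificate;
			// skipping verification is acceptable for this probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// With the apiserver down this prints the same
		// "connection refused" seen throughout the log.
		fmt.Println("openapi probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint reachable, status:", resp.Status)
}
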
	I1205 06:35:51.761349    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:51.761349    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:51.763343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:35:52.765295    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:52.765295    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:52.768404    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:53.769128    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:53.769490    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:53.773090    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:54.773373    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:54.773373    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:54.776047    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:55.776319    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:55.776319    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.779826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:55.779933    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:55.780038    3816 type.go:168] "Request Body" body=""
	I1205 06:35:55.780038    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:55.782548    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:56.782984    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:56.782984    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:56.786482    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:57.787420    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:57.787420    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:57.791145    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:58.791893    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:58.792215    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:58.795191    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:59.795792    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:59.795792    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:59.798496    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:36:00.799902    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:36:00.800226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:36:00.803690    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:36:01.377212    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:01.460054    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:01.465324    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:01.465324    3816 retry.go:31] will retry after 27.628572595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[retry loop condensed: same GET, with_retry attempts 6-10, ~1s apart (06:36:01.803905 through 06:36:05.818288); empty responses in 2-3 ms]
	W1205 06:36:05.821891    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
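The node_ready warnings come from polling the Node object's status.conditions for a Ready condition; here every GET dies with EOF because nothing is answering on the proxied port. A minimal sketch of such a readiness check, assuming client-go; nodeReady is a hypothetical helper, and minikube's own implementation differs in detail:

    package health

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node carries a Ready=True condition.
    func nodeReady(ctx context.Context, kubeconfig, name string) (bool, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return false, err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return false, err
    	}
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. the EOF above when nothing answers the port
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, fmt.Errorf("node %q has no Ready condition", name)
    }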
	[retry loop condensed: fresh request with empty body (06:36:05.821987), then with_retry attempts 1-10, ~1s apart (06:36:06.824568 through 06:36:15.864368); empty responses in 2-5 ms]
	W1205 06:36:15.868370    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop condensed: fresh request with empty body (06:36:15.868414); empty response in 2 ms]
	I1205 06:36:16.847285    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[retry loop condensed: with_retry attempt=1 (06:36:16.871649); empty response in 2 ms]
	I1205 06:36:16.928128    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:16.933236    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:16.933236    3816 retry.go:31] will retry after 34.477637514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[retry loop condensed: with_retry attempts 2-10, ~1s apart (06:36:17.875167 through 06:36:25.905658); empty responses in 2-4 ms]
	W1205 06:36:25.908570    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop condensed: fresh request with empty body (06:36:25.908723), then with_retry attempts 1-3 (06:36:26.911630 through 06:36:28.920716); empty responses in 2-3 ms]
	I1205 06:36:29.100195    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:29.179813    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.183920    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.184562    3816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
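The with_retry lines throughout this log show the other half of the story: the client sleeps between attempts, honors Retry-After hints, and caps itself at ten attempts per request before surfacing the EOF. A rough sketch of that pattern over plain net/http, assuming a one-second default delay; getWithRetryAfter is a hypothetical helper, and the real logic lives in client-go's rest package and differs in detail:

    package retryafter

    import (
    	"fmt"
    	"net/http"
    	"strconv"
    	"time"
    )

    // getWithRetryAfter issues a GET, sleeping between attempts and honoring any
    // Retry-After header, up to the same ten-attempt cap seen in the log above.
    func getWithRetryAfter(client *http.Client, url string) (*http.Response, error) {
    	const maxAttempts = 10
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		resp, err := client.Get(url)
    		if err == nil && resp.Header.Get("Retry-After") == "" {
    			return resp, nil // a usable response with no throttling hint
    		}
    		delay := time.Second // default pause, matching the ~1s cadence above
    		if resp != nil {
    			if s, convErr := strconv.Atoi(resp.Header.Get("Retry-After")); convErr == nil {
    				delay = time.Duration(s) * time.Second // honor the server's hint
    			}
    			resp.Body.Close()
    		}
    		fmt.Printf("got a Retry-After response, attempt=%d, sleeping %s\n", attempt, delay)
    		time.Sleep(delay)
    	}
    	return nil, fmt.Errorf("GET %s: gave up after %d attempts", url, maxAttempts)
    }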
	[retry loop condensed: with_retry attempts 4-10, ~1s apart (06:36:29.924461 through 06:36:35.948067); empty responses in 2-3 ms]
	W1205 06:36:35.951285    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop condensed: fresh request with empty body (06:36:35.951859), then with_retry attempts 1-10 (06:36:36.956335 through 06:36:45.990505); empty responses in 2-4 ms]
	W1205 06:36:45.994070    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop condensed: fresh request with empty body (06:36:45.994133), then with_retry attempts 1-5 (06:36:46.997191 through 06:36:51.015805); empty responses in 2-5 ms]
	I1205 06:36:51.417352    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:51.854034    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:36:51.865604    3816 out.go:179] * Enabled addons: 
	I1205 06:36:51.868880    3816 addons.go:530] duration metric: took 1m56.7213702s for enable addons: enabled=[]
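Both addons ultimately fail for the same root cause: dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening where the apiserver should be, so enable addons finishes with an empty list after nearly two minutes. A quick way to confirm that diagnosis from inside the node is a plain TCP probe; a minimal sketch, hypothetical and not part of the test harness:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the port the apiserver is expected to serve on inside the node.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err) // matches the failure mode above
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open; the validation errors lie elsewhere")
    }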
	[retry loop condensed: with_retry attempts 6-10 (06:36:52.020718 through 06:36:56.037433); empty responses in 2-4 ms]
	W1205 06:36:56.040359    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop condensed: fresh request with empty body (06:36:56.040359), then with_retry attempts 1-10 (06:36:57.043498 through 06:37:06.076793); empty responses in 2-3 ms]
	W1205 06:37:06.079598    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop condensed: fresh request with empty body (06:37:06.079598), then with_retry attempts 1-10 (06:37:07.082493 through 06:37:16.117830); empty responses in 2-3 ms]
	W1205 06:37:16.121438    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop condensed: fresh request with empty body (06:37:16.121438), then with_retry attempts 1-10 (06:37:17.124588 through 06:37:26.157568); empty responses in 2-4 ms]
	W1205 06:37:26.160472    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
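The block above is one full poll cycle: each GET to /api/v1/nodes/functional-247800 comes back with a Retry-After header, client-go's with_retry sleeps 1s and reissues the request, and after ten attempts node_ready.go logs the EOF warning and begins the next cycle. The sketch below illustrates such a Retry-After-honoring poll loop; the names pollOnce and waitWithRetry are hypothetical, and the 1s fallback delay and ten-attempt limit are assumptions read off the log, not minikube's actual code.

// A minimal sketch of a Retry-After-honoring GET poll, assuming a 1s
// fallback delay and 10 attempts per cycle as in the log above. The
// names pollOnce and waitWithRetry are hypothetical, not minikube's.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// pollOnce issues one GET and returns the server-requested retry delay
// (zero when the response carried no Retry-After header).
func pollOnce(client *http.Client, url string) (time.Duration, error) {
	resp, err := client.Get(url)
	if err != nil {
		return 0, err // e.g. the EOF surfaced in the warning lines
	}
	defer resp.Body.Close()

	if ra := resp.Header.Get("Retry-After"); ra != "" {
		if secs, err := strconv.Atoi(ra); err == nil {
			return time.Duration(secs) * time.Second, nil
		}
		return time.Second, nil // unparsable value: fall back to 1s
	}
	return 0, nil
}

// waitWithRetry mirrors the attempt=1..10 runs in the log: retry while
// the server keeps sending Retry-After, then give up for this cycle.
func waitWithRetry(url string, maxAttempts int) error {
	client := &http.Client{Timeout: 5 * time.Second}
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		delay, err := pollOnce(client, url)
		if err != nil {
			return fmt.Errorf("attempt %d: %w", attempt, err)
		}
		if delay == 0 {
			return nil // usable response; caller can inspect the node
		}
		fmt.Printf("Got a Retry-After response, attempt=%d, sleeping %s\n", attempt, delay)
		time.Sleep(delay)
	}
	return fmt.Errorf("no usable response after %d attempts", maxAttempts)
}

func main() {
	err := waitWithRetry("https://127.0.0.1:55398/api/v1/nodes/functional-247800", 10)
	if err != nil {
		fmt.Println("will retry:", err) // analogous to the node_ready.go:55 warning
	}
}

The caller treats an exhausted cycle the way node_ready.go does here: log a warning and immediately start another cycle, which is why the log continues below with a fresh "Request Body" line and attempt counter.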
	I1205 06:37:26.160472    3816 type.go:168] "Request Body" body=""
	I1205 06:37:26.161000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:26.162649    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:27.163417    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:27.163417    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:27.167106    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:28.167812    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:28.167812    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:28.170974    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:29.171418    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:29.171418    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:29.174717    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:30.174973    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:30.174973    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:30.179281    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:31.179472    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:31.179472    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:31.182137    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:32.182463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:32.182463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:32.185914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:33.186359    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:33.186359    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:33.189745    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:34.190102    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:34.190102    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:34.193507    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:35.194094    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:35.194094    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:35.197205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:36.197770    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:36.197770    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.200498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:36.200498    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:36.201020    3816 type.go:168] "Request Body" body=""
	I1205 06:37:36.201099    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.203111    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:37.204025    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:37.204025    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:37.207133    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:38.207447    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:38.207447    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:38.210787    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:39.211776    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:39.211776    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:39.213772    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:40.214710    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:40.214710    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:40.217616    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:41.217767    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:41.217767    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:41.221200    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:42.221683    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:42.222132    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:42.224721    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:43.224982    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:43.224982    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:43.229361    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:44.230310    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:44.230310    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:44.233109    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:45.234073    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:45.234345    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:45.238600    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:46.238845    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:46.238845    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:46.242060    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:37:46.242126    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:46.242126    3816 type.go:168] "Request Body" body=""
	I1205 06:37:46.242126    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:46.244330    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:47.245532    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:47.245532    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:47.248646    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:48.249492    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:48.249786    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:48.252034    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:49.252532    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:49.252532    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:49.255984    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:50.256278    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:50.256278    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:50.260022    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:51.260850    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:51.260850    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:51.262856    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:52.263771    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:52.263771    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:52.266969    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:53.267499    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:53.267499    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:53.270917    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:54.271483    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:54.271483    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:54.273932    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:55.274677    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:55.274677    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:55.277978    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:56.278630    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:56.278630    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:56.281414    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:56.281414    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:56.281414    3816 type.go:168] "Request Body" body=""
	I1205 06:37:56.281414    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:56.283686    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:57.283878    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:57.283878    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:57.286826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:58.287091    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:58.287091    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:58.290488    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:59.291169    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:59.291169    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:59.293886    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:00.294704    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:00.294704    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:00.297861    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:01.298572    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:01.298961    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:01.301760    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:02.302048    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:02.302048    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:02.304517    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:03.305251    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:03.305251    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:03.307969    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:04.308898    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:04.308898    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:04.312237    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:05.313053    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:05.313395    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:05.316566    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:06.316866    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:06.316866    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:06.319941    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:06.319941    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:06.319941    3816 type.go:168] "Request Body" body=""
	I1205 06:38:06.319941    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:06.322349    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:07.322907    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:07.322907    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:07.325564    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:08.326123    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:08.326123    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:08.329670    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:09.330047    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:09.330047    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:09.333169    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:10.333628    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:10.333628    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:10.336729    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:11.337447    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:11.337447    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:11.341026    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:12.342590    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:12.342590    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:12.345509    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:13.345779    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:13.345779    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:13.348736    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:14.349699    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:14.349699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:14.354811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:38:15.355125    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:15.355699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:15.358657    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:16.358925    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:16.358925    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:16.362294    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:16.362394    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:16.362515    3816 type.go:168] "Request Body" body=""
	I1205 06:38:16.362576    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:16.366638    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:17.367505    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:17.367505    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:17.370390    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:18.371098    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:18.371098    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:18.374694    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:19.375813    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:19.375813    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:19.378371    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:20.378981    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:20.378981    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:20.382504    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:21.382666    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:21.382666    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:21.386056    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:22.386435    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:22.386435    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:22.389942    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:23.390201    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:23.390201    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:23.394201    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:24.394754    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:24.394754    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:24.399451    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:25.400206    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:25.400654    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:25.403432    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:26.404412    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:26.404412    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:26.407565    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:26.407565    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:26.407565    3816 type.go:168] "Request Body" body=""
	I1205 06:38:26.407565    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:26.410520    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:27.410783    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:27.410783    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:27.413528    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:28.415022    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:28.415022    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:28.418437    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:29.419313    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:29.419313    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:29.422536    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:30.423342    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:30.423497    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:30.426178    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:31.426933    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:31.426933    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:31.430144    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:32.430929    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:32.430929    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:32.434479    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:33.434863    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:33.434863    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:33.437682    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:34.437924    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:34.437924    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:34.440945    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:35.442134    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:35.442134    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:35.444908    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:36.445071    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:36.445071    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:36.448284    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:36.448309    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:36.448309    3816 type.go:168] "Request Body" body=""
	I1205 06:38:36.448309    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:36.450897    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:37.451653    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:37.451944    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:37.455778    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:38.456494    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:38.456494    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:38.459476    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:39.459817    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:39.460047    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:39.462801    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:40.464111    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:40.464111    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:40.467438    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:41.468570    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:41.468570    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:41.471499    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:42.471858    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:42.471858    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:42.475786    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:43.476207    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:43.476207    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:43.479798    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:44.480584    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:44.480584    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:44.482596    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:45.483834    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:45.483834    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:45.488465    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:46.488899    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:46.488899    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:46.492762    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:46.492857    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:46.493009    3816 type.go:168] "Request Body" body=""
	I1205 06:38:46.493069    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:46.495877    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:47.496162    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:47.496162    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:47.499015    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:48.499326    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:48.499326    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:48.503120    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:49.503509    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:49.503509    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:49.506339    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:50.507027    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:50.507403    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:50.509404    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:51.510410    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:51.510410    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:51.513676    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:52.514297    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:52.514297    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:52.517647    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:53.517908    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:53.517908    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:53.520862    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:54.521180    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:54.521180    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:54.524895    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:55.526048    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:55.526048    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:55.529345    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:56.529859    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:56.529859    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:56.532804    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:38:56.532932    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
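What each cycle is ultimately testing is whether the node's Ready condition is True. A hedged client-go sketch of that check follows; it assumes a kubeconfig at the default path and is not minikube's exact node_ready.go implementation.

// A hedged sketch of the readiness check behind node_ready.go: fetch
// the Node object and look for its Ready condition. Assumes a
// kubeconfig at the default location; not minikube's exact code.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady returns true when the named node reports Ready=True.
// Transport failures (such as the EOF in the log) surface as errors.
func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q has no Ready condition", name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(cs, "functional-247800")
	fmt.Println("ready:", ready, "err:", err)
}

In the failing run the Get call never returns a usable Node: every request ends in the EOF recorded above, so the Ready check can never succeed and the poll loop continues until the test's overall timeout.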
	I1205 06:38:56.533087    3816 type.go:168] "Request Body" body=""
	I1205 06:38:56.533133    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:56.534781    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:38:57.535534    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:57.535534    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:57.538765    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:58.538928    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:58.538928    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:58.542189    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:59.542538    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:59.542538    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:59.545041    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:00.545961    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:00.545961    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:00.549272    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:01.550020    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:01.550020    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:01.553982    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:02.554834    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:02.554834    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:02.557878    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:03.558082    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:03.558082    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:03.560631    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:04.561450    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:04.561450    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:04.564816    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:39:05.565884    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:05.565884    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:05.568807    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:39:06.569924    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:39:06.570101    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:06.573050    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:39:06.573172    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:39:06.573295    3816 type.go:168] "Request Body" body=""
	I1205 06:39:06.573378    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:39:06.577668    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[... the same 1s Retry-After cycle repeats unchanged: attempts 1-10 against https://127.0.0.1:55398/api/v1/nodes/functional-247800, each response returning in 1-5ms with empty status, and node_ready.go:55 logging the identical EOF warning after every cycle at 06:39:16, 06:39:26, 06:39:36, 06:39:46, 06:39:56, 06:40:06, 06:40:16, and 06:40:26; the captured log breaks off mid-cycle at 06:40:34 ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:34.936160    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:35.937442    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:35.937442    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:35.941103    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:36.941232    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:36.941232    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.943558    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:36.943558    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:36.943558    3816 type.go:168] "Request Body" body=""
	I1205 06:40:36.943558    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.946031    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:40:37.946448    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:37.946847    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:37.949586    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:38.949756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:38.950157    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:38.952901    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:39.953375    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:39.953783    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:39.956248    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:40.957703    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:40.957703    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:40.960899    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:41.961836    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:41.961836    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:41.965167    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:42.965316    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:42.965560    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:42.968007    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:43.968734    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:43.968734    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:43.971410    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:44.972311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:44.972311    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:44.975433    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:45.976381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:45.976381    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:45.981080    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:40:46.981463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:46.981463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.986037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1205 06:40:46.986125    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:46.986226    3816 type.go:168] "Request Body" body=""
	I1205 06:40:46.986226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.989122    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:47.989324    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:47.989324    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:47.992720    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:48.992852    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:48.992852    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:48.995205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:49.995580    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:49.995580    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:49.998526    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:50.998794    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:50.998794    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:51.001637    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:52.002658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:52.002658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:52.004968    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:53.005044    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:53.005445    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:53.008445    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:54.009089    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:54.009089    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:54.012447    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:55.012756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:55.012756    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:55.015364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:55.523386    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 06:40:55.523386    3816 node_ready.go:38] duration metric: took 6m0.0010607s for node "functional-247800" to be "Ready" ...
	I1205 06:40:55.527309    3816 out.go:203] 
	W1205 06:40:55.529851    3816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:40:55.529851    3816 out.go:285] * 
	W1205 06:40:55.531579    3816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:40:55.533404    3816 out.go:203] 
	
	
	==> Docker <==
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.520999227Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521005327Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521028530Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521065534Z" level=info msg="Initializing buildkit"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.631468044Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636567622Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725240Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636825651Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725440Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:34:51 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:51 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:34:52 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:34:52 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:43:06.317351   20623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:06.318343   20623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:06.320859   20623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:06.322802   20623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:06.323700   20623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001158] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001030] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001035] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000969] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000975] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:34] CPU: 4 PID: 56451 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000864] RIP: 0033:0x7f46c5e3eb20
	[  +0.000406] Code: Unable to access opcode bytes at RIP 0x7f46c5e3eaf6.
	[  +0.000950] RSP: 002b:00007fff1eb3d7e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001108] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001199] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000983] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000845] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000799] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000884] FS:  0000000000000000 GS:  0000000000000000
	[  +0.829311] CPU: 0 PID: 56573 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000781] RIP: 0033:0x7f241df52b20
	[  +0.000533] Code: Unable to access opcode bytes at RIP 0x7f241df52af6.
	[  +0.000663] RSP: 002b:00007ffded7fa4e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000781] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:43:06 up  2:16,  0 user,  load average: 0.42, 0.39, 0.59
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:43:02 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:03 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 989.
	Dec 05 06:43:03 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:03 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:03 functional-247800 kubelet[20454]: E1205 06:43:03.727468   20454 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:03 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:03 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:04 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 990.
	Dec 05 06:43:04 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:04 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:04 functional-247800 kubelet[20465]: E1205 06:43:04.498371   20465 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:04 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:04 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:05 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 991.
	Dec 05 06:43:05 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:05 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:05 functional-247800 kubelet[20495]: E1205 06:43:05.256882   20495 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:05 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:05 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:05 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 992.
	Dec 05 06:43:05 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:05 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:05 functional-247800 kubelet[20594]: E1205 06:43:05.979942   20594 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:05 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:05 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
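
The kubelet restart loop at the end of the log above is the proximate failure: kubelet v1.35.0-beta.0 refuses to start because the WSL2 host still mounts cgroup v1. A minimal Go probe for that condition, assuming golang.org/x/sys/unix on a Linux host (it is not part of this test suite), would look like:

	// cgroupcheck.go: a minimal, hypothetical probe (not part of this test
	// suite) for the condition the kubelet log above complains about. It
	// reports whether /sys/fs/cgroup is a unified cgroup v2 mount; on this
	// WSL2 host it would take the v1 branch. Linux-only.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var fs unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if fs.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified): kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1: matches the kubelet validation failure above")
		}
	}

Run inside the minikube container (or the WSL2 distro) this distinguishes a host that can run the v1.35 kubelet from one that hits restart counter 989+ as logged above.
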
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (622.4895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (54.47s)
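
The six-minute node-ready wait logged above is a once-per-second poll of GET /api/v1/nodes/functional-247800 until the context deadline expires. A rough client-go sketch of such a loop (an illustration under that assumption, not minikube's actual node_ready.go) follows:

	// waitready.go: a rough sketch (assuming client-go; not minikube's
	// actual code) of the kind of loop behind the "will retry" lines
	// above: poll the node's Ready condition every second until a
	// six-minute deadline.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err = wait.PollUntilContextCancel(ctx, time.Second, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-247800", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient EOF: keep retrying until the deadline
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		// A deadline-exceeded error here mirrors the WaitNodeCondition
		// failure reported in the log above.
		fmt.Println("wait result:", err)
	}

Since the apiserver on 127.0.0.1:55398 answers every request with EOF, the condition function never returns true and the loop can only end at the deadline, which is exactly the GUEST_START exit seen above.
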

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (54.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-247800 get pods
functional_test.go:756: (dbg) Non-zero exit: out\kubectl.exe --context functional-247800 get pods: exit status 1 (50.5468531s)

                                                
                                                
** stderr ** 
	E1205 06:43:18.256285    9172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:43:28.347785    9172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:43:38.388096    9172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:43:48.429181    9172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:43:58.468935    9172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out\\kubectl.exe --context functional-247800 get pods": exit status 1
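
The direct kubectl run dials https://127.0.0.1:55398 because that is the server recorded for the functional-247800 context in the kubeconfig. A small hypothetical helper, assuming client-go's clientcmd package, that prints the URL a context resolves to:

	// whichserver.go: hypothetical helper (assumes client-go's clientcmd)
	// that prints the API server URL the functional-247800 kubeconfig
	// context resolves to, i.e. the address kubectl fails to reach above.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "functional-247800"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		fmt.Println(cfg.Host) // e.g. https://127.0.0.1:55398
	}
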
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
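
The inspect output confirms the wiring: the container publishes the apiserver's 8441/tcp at 127.0.0.1:55398, the exact endpoint the failing requests above are dialing, so the connection path is intact and the EOFs come from the backend, not the port mapping. A hypothetical helper (not part of the suite) that recovers that mapping by decoding docker inspect output:

	// hostport.go: a small, hypothetical helper (not part of this test
	// suite) that shells out to `docker inspect` and extracts the host
	// port Docker published for the apiserver's 8441/tcp, matching the
	// 127.0.0.1:55398 endpoint seen above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-247800").Output()
		if err != nil {
			panic(err)
		}
		var insp []struct {
			NetworkSettings struct {
				Ports map[string][]struct {
					HostIp   string
					HostPort string
				}
			}
		}
		if err := json.Unmarshal(out, &insp); err != nil {
			panic(err)
		}
		for _, b := range insp[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
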
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (743.0834ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.5444511s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-088800 image ls --format short --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format yaml --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ ssh     │ functional-088800 ssh pgrep buildkitd                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ image   │ functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr                  │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls                                                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format json --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format table --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ delete  │ -p functional-088800                                                                                                    │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:26 UTC │
	│ start   │ -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:26 UTC │                     │
	│ start   │ -p functional-247800 --alsologtostderr -v=8                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:34 UTC │                     │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:41 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:latest                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add minikube-local-cache-test:functional-247800                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache delete minikube-local-cache-test:functional-247800                                              │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl images                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ cache   │ functional-247800 cache reload                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ kubectl │ functional-247800 kubectl -- --context functional-247800 get pods                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:34:44
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:34:43.990318    3816 out.go:360] Setting OutFile to fd 932 ...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.034404    3816 out.go:374] Setting ErrFile to fd 1564...
	I1205 06:34:44.034404    3816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:34:44.048005    3816 out.go:368] Setting JSON to false
	I1205 06:34:44.051134    3816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7741,"bootTime":1764908742,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:34:44.051134    3816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:34:44.054997    3816 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:34:44.057041    3816 notify.go:221] Checking for updates...
	I1205 06:34:44.057041    3816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:44.060615    3816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:34:44.063386    3816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:34:44.065338    3816 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:34:44.068100    3816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:34:44.070765    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:44.071546    3816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:34:44.185014    3816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:34:44.190117    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.434951    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.415349563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.438948    3816 out.go:179] * Using the docker driver based on existing profile
	I1205 06:34:44.442716    3816 start.go:309] selected driver: docker
	I1205 06:34:44.442716    3816 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.442716    3816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:34:44.449451    3816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:34:44.693650    3816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:34:44.673163701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:34:44.776708    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:44.776708    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:44.776708    3816 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:44.779353    3816 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:34:44.789396    3816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:34:44.793121    3816 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:34:44.794774    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:44.794774    3816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:34:44.844630    3816 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:44.871213    3816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:34:44.871213    3816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:34:45.153466    3816 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:34:45.154472    3816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:34:45.154472    3816 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:34:45.156762    3816 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:34:45.156819    3816 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:45.157157    3816 start.go:364] duration metric: took 122.3µs to acquireMachinesLock for "functional-247800"
	I1205 06:34:45.157157    3816 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:34:45.157157    3816 fix.go:54] fixHost starting: 
	I1205 06:34:45.165313    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:45.243648    3816 fix.go:112] recreateIfNeeded on functional-247800: state=Running err=<nil>
	W1205 06:34:45.243648    3816 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:34:45.267762    3816 out.go:252] * Updating the running docker "functional-247800" container ...
	I1205 06:34:45.269766    3816 machine.go:94] provisionDockerMachine start ...
	I1205 06:34:45.274766    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:45.449049    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:45.449049    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:45.449049    3816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:34:45.686505    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:45.686505    3816 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:34:45.691507    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:46.703091    3816 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800: (1.0115691s)
	I1205 06:34:46.706016    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:46.706016    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:46.706016    3816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:34:47.035712    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:34:47.042684    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.107199    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:47.107199    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:47.107199    3816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:34:47.308149    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:34:47.308197    3816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:34:47.308318    3816 ubuntu.go:190] setting up certificates
	I1205 06:34:47.308318    3816 provision.go:84] configureAuth start
	I1205 06:34:47.315253    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:47.380504    3816 provision.go:143] copyHostCerts
	I1205 06:34:47.381517    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:34:47.381517    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:34:47.381517    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:34:47.382508    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:34:47.382508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:34:47.382508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:34:47.383507    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:34:47.384508    3816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:34:47.384508    3816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:34:47.385507    3816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
	I1205 06:34:47.573727    3816 provision.go:177] copyRemoteCerts
	I1205 06:34:47.580429    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:34:47.585428    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.664000    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:47.815162    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1205 06:34:47.815801    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:34:47.849954    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1205 06:34:47.850956    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:34:47.876175    3816 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.876248    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:34:47.876248    3816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.7217371s
	I1205 06:34:47.876248    3816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:34:47.883801    3816 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.883881    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:34:47.883881    3816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.72937s
	I1205 06:34:47.883881    3816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:34:47.908586    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1205 06:34:47.909421    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:34:47.925048    3816 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.925345    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:34:47.925345    3816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.7708333s
	I1205 06:34:47.925345    3816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:34:47.926059    3816 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.926059    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:34:47.926059    3816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.7715471s
	I1205 06:34:47.926059    3816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:34:47.936781    3816 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.937442    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:34:47.937555    3816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.7830428s
	I1205 06:34:47.937609    3816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:34:47.946154    3816 provision.go:87] duration metric: took 637.8269ms to configureAuth
	I1205 06:34:47.946231    3816 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:34:47.946358    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:47.951931    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:47.990646    3816 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:47.990646    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:34:47.991641    3816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.8371282s
	I1205 06:34:47.991641    3816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:34:48.007838    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.008431    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.008476    3816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:34:48.018898    3816 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.018898    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:34:48.018898    3816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.8643851s
	I1205 06:34:48.018898    3816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:34:48.061664    3816 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:34:48.062004    3816 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:34:48.062141    3816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9076274s
	I1205 06:34:48.062141    3816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:34:48.062198    3816 cache.go:87] Successfully saved all images to host disk.
	I1205 06:34:48.196159    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:34:48.196159    3816 ubuntu.go:71] root file system type: overlay
	I1205 06:34:48.196159    3816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:34:48.200167    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.256431    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.257239    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.257347    3816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:34:48.462598    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:34:48.466014    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.522845    3816 main.go:143] libmachine: Using SSH client type: native
	I1205 06:34:48.523383    3816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff74787ea80] 0x7ff7478815e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:34:48.523415    3816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:34:48.714113    3816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:34:48.714641    3816 machine.go:97] duration metric: took 3.444826s to provisionDockerMachine
	I1205 06:34:48.714700    3816 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:34:48.714747    3816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:34:48.721762    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:34:48.726053    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:48.800573    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:48.947188    3816 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:34:48.954494    3816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_ID="12"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:34:48.954494    3816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:34:48.954494    3816 command_runner.go:130] > ID=debian
	I1205 06:34:48.954494    3816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:34:48.954494    3816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:34:48.955010    3816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:34:48.955099    3816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:34:48.955099    3816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:34:48.955143    3816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:34:48.955806    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:34:48.955806    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /etc/ssl/certs/80362.pem
	I1205 06:34:48.956436    3816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:34:48.956436    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> /etc/test/nested/copy/8036/hosts
	I1205 06:34:48.960827    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:34:48.973199    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:34:49.002014    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:34:49.027943    3816 start.go:296] duration metric: took 313.2383ms for postStartSetup
	I1205 06:34:49.031806    3816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:34:49.035611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.090476    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.213008    3816 command_runner.go:130] > 1%
	I1205 06:34:49.217907    3816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:34:49.227048    3816 command_runner.go:130] > 950G
	I1205 06:34:49.227093    3816 fix.go:56] duration metric: took 4.0698775s for fixHost
	I1205 06:34:49.227184    3816 start.go:83] releasing machines lock for "functional-247800", held for 4.069942s
	I1205 06:34:49.230591    3816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:34:49.286648    3816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:34:49.290773    3816 ssh_runner.go:195] Run: cat /version.json
	I1205 06:34:49.290773    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.294768    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:49.346982    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.347419    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:49.463868    3816 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:34:49.468593    3816 ssh_runner.go:195] Run: systemctl --version
	I1205 06:34:49.473361    3816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1205 06:34:49.473361    3816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 06:34:49.482411    3816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:34:49.482411    3816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:34:49.486655    3816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:34:49.495075    3816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:34:49.495101    3816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:34:49.499557    3816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:34:49.512091    3816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:34:49.512091    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:49.512091    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:49.512091    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:49.534248    3816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:34:49.538479    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:34:49.557417    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:34:49.572725    3816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:34:49.577000    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1205 06:34:49.583562    3816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:34:49.583562    3816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 06:34:49.600012    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.618632    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:34:49.636357    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:34:49.654641    3816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:34:49.675114    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:34:49.696597    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:34:49.715167    3816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:34:49.738213    3816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:34:49.750303    3816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:34:49.754900    3816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:34:49.771255    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:49.909849    3816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:34:50.068262    3816 start.go:496] detecting cgroup driver to use...
	I1205 06:34:50.068262    3816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:34:50.073308    3816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:34:50.092739    3816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1205 06:34:50.092785    3816 command_runner.go:130] > [Unit]
	I1205 06:34:50.092785    3816 command_runner.go:130] > Description=Docker Application Container Engine
	I1205 06:34:50.092785    3816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1205 06:34:50.092828    3816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1205 06:34:50.092828    3816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1205 06:34:50.092828    3816 command_runner.go:130] > Requires=docker.socket
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitBurst=3
	I1205 06:34:50.092828    3816 command_runner.go:130] > StartLimitIntervalSec=60
	I1205 06:34:50.092884    3816 command_runner.go:130] > [Service]
	I1205 06:34:50.092884    3816 command_runner.go:130] > Type=notify
	I1205 06:34:50.092884    3816 command_runner.go:130] > Restart=always
	I1205 06:34:50.092919    3816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1205 06:34:50.092943    3816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1205 06:34:50.092943    3816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1205 06:34:50.092943    3816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1205 06:34:50.092943    3816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1205 06:34:50.092943    3816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1205 06:34:50.092943    3816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1205 06:34:50.092943    3816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1205 06:34:50.092943    3816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNOFILE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitNPROC=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > LimitCORE=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1205 06:34:50.092943    3816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1205 06:34:50.092943    3816 command_runner.go:130] > TasksMax=infinity
	I1205 06:34:50.092943    3816 command_runner.go:130] > TimeoutStartSec=0
	I1205 06:34:50.092943    3816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1205 06:34:50.092943    3816 command_runner.go:130] > Delegate=yes
	I1205 06:34:50.092943    3816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1205 06:34:50.092943    3816 command_runner.go:130] > KillMode=process
	I1205 06:34:50.092943    3816 command_runner.go:130] > OOMScoreAdjust=-500
	I1205 06:34:50.092943    3816 command_runner.go:130] > [Install]
	I1205 06:34:50.092943    3816 command_runner.go:130] > WantedBy=multi-user.target
	I1205 06:34:50.097721    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.125496    3816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:34:50.186929    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:34:50.209805    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:34:50.227504    3816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:34:50.252330    3816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1205 06:34:50.256641    3816 ssh_runner.go:195] Run: which cri-dockerd
	I1205 06:34:50.264328    3816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1205 06:34:50.269234    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:34:50.282005    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:34:50.306573    3816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:34:50.447619    3816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:34:50.580607    3816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:34:50.581126    3816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 06:34:50.605071    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:34:50.630349    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:50.782135    3816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:34:51.643866    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:34:51.667031    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:34:51.689935    3816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 06:34:51.715903    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:51.740104    3816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:34:51.897148    3816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:34:52.038509    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.188129    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:34:52.216759    3816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:34:52.241711    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:52.388958    3816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:34:52.491038    3816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:34:52.508998    3816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:34:52.514460    3816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:34:52.523944    3816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1205 06:34:52.524474    3816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:34:52.524548    3816 command_runner.go:130] > Device: 0,112	Inode: 1756        Links: 1
	I1205 06:34:52.524589    3816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1205 06:34:52.524606    3816 command_runner.go:130] > Access: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524642    3816 command_runner.go:130] > Modify: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] > Change: 2025-12-05 06:34:52.399314148 +0000
	I1205 06:34:52.524689    3816 command_runner.go:130] >  Birth: -
	I1205 06:34:52.524737    3816 start.go:564] Will wait 60s for crictl version
	I1205 06:34:52.529361    3816 ssh_runner.go:195] Run: which crictl
	I1205 06:34:52.536028    3816 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:34:52.539850    3816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:34:52.581379    3816 command_runner.go:130] > Version:  0.1.0
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeName:  docker
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeVersion:  29.0.4
	I1205 06:34:52.581379    3816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:34:52.581379    3816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 06:34:52.585592    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.624737    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.628712    3816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:34:52.665154    3816 command_runner.go:130] > 29.0.4
	I1205 06:34:52.668797    3816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:34:52.672375    3816 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:34:52.798876    3816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:34:52.801876    3816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:34:52.809731    3816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1205 06:34:52.813378    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:52.870537    3816 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:34:52.870721    3816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:34:52.873969    3816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1205 06:34:52.909019    3816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1205 06:34:52.909019    3816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:52.909019    3816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 06:34:52.909019    3816 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:34:52.909019    3816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:34:52.909019    3816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
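
The empty ExecStart= followed by a second ExecStart= in the unit fragment above is the standard systemd override pattern: the first line clears the command inherited from kubelet.service, the second supplies the replacement. The fragment lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (written a few lines below); a minimal drop-in of the same shape, with the flag list abbreviated relative to the full ExecStart above:

    [Service]
    # an empty ExecStart= clears the command inherited from the base unit
    ExecStart=
    # the next ExecStart= supplies the replacement command line
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
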
	I1205 06:34:52.913141    3816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:34:52.986014    3816 command_runner.go:130] > cgroupfs
	I1205 06:34:52.986014    3816 cni.go:84] Creating CNI manager for ""
	I1205 06:34:52.986014    3816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:34:52.986014    3816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:34:52.986014    3816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:34:52.986014    3816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:34:52.990595    3816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubeadm
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubectl
	I1205 06:34:53.003509    3816 command_runner.go:130] > kubelet
	I1205 06:34:53.003509    3816 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:34:53.008042    3816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:34:53.020762    3816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:34:53.041328    3816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:34:53.061676    3816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
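
The three payloads just copied are the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the kubeadm config printed earlier. Two hand-checks are possible at this point, sketched under the assumption that this kubeadm build ships the config validate subcommand (upstream kubeadm has it since v1.26):

    # show the merged kubelet unit systemd will actually run (host-side, container name from this run)
    docker exec functional-247800 systemctl cat kubelet
    # sanity-check the generated kubeadm config in place (node-side)
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
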
	I1205 06:34:53.085180    3816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:34:53.093591    3816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:34:53.098459    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:53.247095    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:53.952452    3816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:34:53.952558    3816 certs.go:195] generating shared ca certs ...
	I1205 06:34:53.952558    3816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:53.953085    3816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:34:53.953228    3816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:34:53.953228    3816 certs.go:257] generating profile certs ...
	I1205 06:34:53.954037    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:34:53.954334    3816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:34:53.954527    3816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:34:53.954527    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:34:53.954631    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:34:53.954814    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:34:53.954910    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:34:53.954973    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:34:53.955045    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:34:53.955116    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:34:53.955223    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:34:53.955290    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:34:53.955826    3816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:34:53.955954    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:34:53.956129    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:34:53.956372    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:34:53.956912    3816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:34:53.957083    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem -> /usr/share/ca-certificates/8036.pem
	I1205 06:34:53.957119    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> /usr/share/ca-certificates/80362.pem
	I1205 06:34:53.957269    3816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:53.958214    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:34:53.988313    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:34:54.013387    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:34:54.046063    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:34:54.077041    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:34:54.105745    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:34:54.131011    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:34:54.161212    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:34:54.186054    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:34:54.215522    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:34:54.241991    3816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:34:54.271902    3816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:34:54.296449    3816 ssh_runner.go:195] Run: openssl version
	I1205 06:34:54.306573    3816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:34:54.311042    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.336884    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:34:54.353148    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.362688    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.366452    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:34:54.412489    3816 command_runner.go:130] > 3ec20f2e
	I1205 06:34:54.416608    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:34:54.434824    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.453553    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:34:54.472739    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481910    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.481979    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.485785    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:34:54.529492    3816 command_runner.go:130] > b5213941
	I1205 06:34:54.534432    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:34:54.550655    3816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.568891    3816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:34:54.588631    3816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.603145    3816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.607947    3816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:34:54.650843    3816 command_runner.go:130] > 51391683
	I1205 06:34:54.656334    3816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
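
The test/ln/hash/test sequence repeated above for each of the three certs is the manual equivalent of what update-ca-certificates automates: OpenSSL resolves trust anchors through /etc/ssl/certs symlinks named after the certificate's subject hash. One round, sketched with the 8036.pem cert and the 51391683 hash just printed:

    # compute the subject hash OpenSSL uses for lookup (prints 51391683 for this cert)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem)
    # expose the cert under /etc/ssl/certs/<hash>.0 so the default verify path finds it
    sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/${HASH}.0
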
	I1205 06:34:54.673967    3816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.682495    3816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:34:54.683019    3816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:34:54.683019    3816 command_runner.go:130] > Device: 8,48	Inode: 15231       Links: 1
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:34:54.683019    3816 command_runner.go:130] > Access: 2025-12-05 06:30:39.655512939 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Modify: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] > Change: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.683019    3816 command_runner.go:130] >  Birth: 2025-12-05 06:26:37.208271977 +0000
	I1205 06:34:54.687561    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:34:54.732319    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.737009    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:34:54.781446    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.785553    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:34:54.831869    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.837267    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:34:54.879433    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.883677    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:34:54.927800    3816 command_runner.go:130] > Certificate will not expire
	I1205 06:34:54.932770    3816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:34:54.976702    3816 command_runner.go:130] > Certificate will not expire
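
Each -checkend 86400 probe above exits 0 (printing "Certificate will not expire") when the certificate is still valid 24 hours out, and exits 1 otherwise, so it composes directly as a shell condition:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"
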
	I1205 06:34:54.977317    3816 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:34:54.981646    3816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:34:55.016824    3816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:34:55.029851    3816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:34:55.029915    3816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:34:55.029954    3816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:34:55.029954    3816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:34:55.034067    3816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:34:55.049954    3816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:34:55.054431    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.105351    3816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-247800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.105351    3816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-247800" cluster setting kubeconfig missing "functional-247800" context setting]
	I1205 06:34:55.106335    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.121466    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.122042    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:34:55.123267    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:34:55.123267    3816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:34:55.127724    3816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:34:55.143728    3816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 06:34:55.143728    3816 kubeadm.go:602] duration metric: took 113.7728ms to restartPrimaryControlPlane
	I1205 06:34:55.143728    3816 kubeadm.go:403] duration metric: took 166.4081ms to StartCluster
	I1205 06:34:55.143728    3816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.143728    3816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.145169    3816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:34:55.145829    3816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 06:34:55.145829    3816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:34:55.145829    3816 addons.go:70] Setting storage-provisioner=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:70] Setting default-storageclass=true in profile "functional-247800"
	I1205 06:34:55.145829    3816 addons.go:239] Setting addon storage-provisioner=true in "functional-247800"
	I1205 06:34:55.145829    3816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-247800"
	I1205 06:34:55.145829    3816 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:34:55.145829    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.153665    3816 out.go:179] * Verifying Kubernetes components...
	I1205 06:34:55.154863    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.158249    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.163403    3816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:34:55.210939    3816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:34:55.211668    3816 kapi.go:59] client config for functional-247800: &rest.Config{Host:"https://127.0.0.1:55398", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff749817340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:34:55.212897    3816 addons.go:239] Setting addon default-storageclass=true in "functional-247800"
	I1205 06:34:55.212990    3816 host.go:66] Checking if "functional-247800" exists ...
	I1205 06:34:55.213105    3816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:34:55.217433    3816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:34:55.222787    3816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.222787    3816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:34:55.224705    3816 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:34:55.226041    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.278804    3816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.278804    3816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:34:55.278889    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.282998    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.334515    3816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:34:55.337518    3816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:34:55.430551    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.457611    3816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:34:55.475848    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.517112    3816 node_ready.go:35] waiting up to 6m0s for node "functional-247800" to be "Ready" ...
	I1205 06:34:55.517112    3816 type.go:168] "Request Body" body=""
	I1205 06:34:55.517112    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:55.519131    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:34:55.528125    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.578790    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.578790    3816 retry.go:31] will retry after 337.958227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
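
Every apply in this stretch fails the same way: kubectl tries to download the OpenAPI schema for client-side validation from localhost:8441, and nothing is listening yet because kubelet was only just started and the apiserver static pod is still coming up, hence the connection-refused retries below. One way to poll for readiness from inside the node before retrying, using the same kubeconfig the applies use (a sketch, not what minikube itself does here):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz
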
	I1205 06:34:55.602029    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.605442    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.605442    3816 retry.go:31] will retry after 279.867444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.890357    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:55.921657    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:55.969614    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:55.974371    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:55.974371    3816 retry.go:31] will retry after 509.000816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.006071    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.010642    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.010642    3816 retry.go:31] will retry after 471.064759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.487937    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:56.489162    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:56.520264    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:56.520264    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:56.523343    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:56.575976    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 407.043808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:56.579606    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.579606    3816 retry.go:31] will retry after 638.604661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:56.992080    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.065952    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.069179    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.069179    3816 retry.go:31] will retry after 488.646188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.223461    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.294874    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.299418    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.299514    3816 retry.go:31] will retry after 602.819042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.524155    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:57.524155    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:57.527278    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:57.562706    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:57.639333    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.644388    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.644388    3816 retry.go:31] will retry after 1.399464773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.907870    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:57.981775    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:57.984813    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:57.984921    3816 retry.go:31] will retry after 1.652361939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:58.527501    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:58.527501    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:58.529897    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:34:59.050453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:34:59.133420    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.139944    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.139944    3816 retry.go:31] will retry after 1.645340531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.530709    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:34:59.530709    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:34:59.534391    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:34:59.642381    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:34:59.718427    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:34:59.721834    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:34:59.721834    3816 retry.go:31] will retry after 2.46016532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.534639    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:00.534639    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:00.541150    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:35:00.790675    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:00.867216    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:00.867216    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:00.867216    3816 retry.go:31] will retry after 3.092416499s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:01.541435    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:01.541435    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:01.544716    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:02.187405    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:02.268020    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:02.273203    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.273203    3816 retry.go:31] will retry after 2.104673669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:02.544980    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:02.544980    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:02.548584    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:03.548839    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:03.548839    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:03.553516    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:03.966453    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:04.049450    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.054065    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.054065    3816 retry.go:31] will retry after 2.461370012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.382944    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:04.458068    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:04.461488    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.461488    3816 retry.go:31] will retry after 4.66223575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:04.554680    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:04.555045    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:04.559246    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:05.559799    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:05.560272    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.563266    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:05.563380    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:05.563407    3816 type.go:168] "Request Body" body=""
	I1205 06:35:05.563407    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:05.565659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
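
(Editor's note: the node_ready.go warning above comes from repeatedly polling the node's Ready condition and getting EOF back from the apiserver. A minimal client-go sketch of that check, under the assumption of a standard kubeconfig path; this is an illustrative reimplementation, not minikube's node_ready.go:)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node has condition Ready=True.
// While the apiserver is down, the Get fails (connection refused / EOF),
// which is exactly the error the log above keeps retrying on.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path assumed from the log's KUBECONFIG environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(context.Background(), cs, "functional-247800")
	fmt.Println(ready, err)
}
```
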
	I1205 06:35:06.521322    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:06.565857    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:06.565857    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:06.569356    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:06.601193    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:06.606428    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:06.606428    3816 retry.go:31] will retry after 3.326595593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:07.570311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:07.570658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:07.572699    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:08.573282    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:08.573282    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:08.576531    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:09.129039    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:09.217404    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:09.217937    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.217937    3816 retry.go:31] will retry after 6.891085945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:09.577333    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:09.577333    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:09.580146    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:09.938122    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:10.010022    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:10.013513    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.013513    3816 retry.go:31] will retry after 11.942280673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:10.581103    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:10.581488    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:10.585509    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:11.586198    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:11.586569    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:11.589434    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:12.589851    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:12.589851    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:12.594400    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:13.595039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:13.595039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:13.598596    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:14.599060    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:14.599060    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:14.601840    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:15.602885    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:15.602885    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.605878    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:15.605878    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:15.605878    3816 type.go:168] "Request Body" body=""
	I1205 06:35:15.605878    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:15.608593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:16.114246    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:16.191406    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:16.193997    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.193997    3816 retry.go:31] will retry after 14.066483079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:16.609000    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:16.609000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:16.611991    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:17.612458    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:17.612996    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:17.617813    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:18.618806    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:18.618806    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:18.622265    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:19.623287    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:19.623287    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:19.627037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:20.627291    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:20.627658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:20.630318    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:21.630930    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:21.630930    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:21.635020    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:21.963392    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:22.044084    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:22.048902    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.048902    3816 retry.go:31] will retry after 11.169519715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:22.635453    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:22.635453    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:22.638251    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:23.639335    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:23.639335    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:23.642113    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:24.642790    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:24.642790    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:24.645713    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:25.646115    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:25.646115    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.649594    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:25.649594    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:25.649594    3816 type.go:168] "Request Body" body=""
	I1205 06:35:25.649594    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:25.652081    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:26.652283    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:26.652283    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:26.656196    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:27.656951    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:27.656951    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:27.660911    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:28.661511    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:28.661511    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:28.665811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:29.666123    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:29.666562    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:29.669285    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:30.265388    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:30.346699    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:30.350211    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.350747    3816 retry.go:31] will retry after 20.097178843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:30.669645    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:30.669645    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:30.673744    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:31.674027    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:31.674411    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:31.676873    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:32.677707    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:32.677707    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:32.680779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:33.224337    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:33.301595    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:33.304702    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.304702    3816 retry.go:31] will retry after 17.498614608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:33.681368    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:33.681368    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:33.685247    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:34.685570    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:34.685570    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:34.689019    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:35.689478    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:35.689478    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.693423    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:35:35.693478    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:35.693605    3816 type.go:168] "Request Body" body=""
	I1205 06:35:35.693728    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:35.697203    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:36.697741    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:36.697741    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:36.700841    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:37.701712    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:37.701712    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:37.705613    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:38.706497    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:38.706497    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:38.709240    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:39.710263    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:39.710263    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:39.714262    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:40.714574    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:40.714574    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:40.717659    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:41.717815    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:41.717815    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:41.720914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:42.722129    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:42.722129    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:42.725427    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:43.726728    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:43.727083    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:43.729850    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:44.730383    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:44.730383    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:44.733852    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:45.735220    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:45.735642    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.738135    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:35:45.738135    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:35:45.738135    3816 type.go:168] "Request Body" body=""
	I1205 06:35:45.738135    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:45.740498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:46.740699    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:46.740699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:46.744820    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:35:47.745629    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:47.746108    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:47.748477    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:48.749130    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:48.749130    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:48.752304    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:49.753459    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:49.753860    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:49.756462    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:35:50.453778    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:35:50.536078    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.536601    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.536601    3816 retry.go:31] will retry after 10.835620015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.756979    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:35:50.756979    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:35:50.760402    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:35:50.808292    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:35:50.896096    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:35:50.901180    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:35:50.901180    3816 retry.go:31] will retry after 25.940426602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[retry loop elided: attempts 6-10 of the node GET, one per second from 06:35:51.761 to 06:35:55.779, each returning an empty response in 1-3 ms]
	W1205 06:35:55.779933    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
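
	The node_ready warning above is minikube polling the node's Ready condition and getting EOF back because nothing is answering behind the apiserver port. The check it keeps retrying amounts to the following client-go lookup (a minimal sketch; clientset construction is omitted):

	package nodeprobe

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady fetches the Node and inspects its Ready condition,
	// the same check node_ready.go polls for. When the apiserver is
	// down, the Get itself errors out (the EOF seen in the log).
	func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
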
	[retry loop elided: fresh node GET at 06:35:55.780, then attempts 1-5 at one-second intervals through 06:36:00.803, all empty responses in 2-3 ms]
	I1205 06:36:01.377212    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:01.460054    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:01.465324    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:01.465324    3816 retry.go:31] will retry after 27.628572595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
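
	Meanwhile the with_retry entries show client-go's transport-level loop: each 1s Retry-After answer triggers another attempt, and after ten attempts the underlying error (EOF) is handed back to node_ready.go. A simplified net/http stand-in for that behavior, not client-go's actual implementation:

	package retryget

	import (
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	// getWithRetryAfter retries up to 10 times, honoring a Retry-After
	// header (defaulting to the 1s delay the log shows), then surfaces
	// the last error to the caller.
	func getWithRetryAfter(url string) (*http.Response, error) {
		var lastErr error
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := http.Get(url)
			if err == nil && resp.Header.Get("Retry-After") == "" {
				return resp, nil // a real answer; hand it back
			}
			delay := time.Second // matches delay="1s" in the log
			if resp != nil {
				if s := resp.Header.Get("Retry-After"); s != "" {
					if secs, perr := strconv.Atoi(s); perr == nil {
						delay = time.Duration(secs) * time.Second
					}
				}
				resp.Body.Close()
				lastErr = fmt.Errorf("attempt %d: server sent Retry-After", attempt)
			} else {
				lastErr = err // e.g. EOF once the endpoint stops answering
			}
			time.Sleep(delay)
		}
		return nil, lastErr
	}
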
	[retry loop elided: attempts 6-10 of the node GET, one per second from 06:36:01.803 to 06:36:05.821, empty responses in 2-3 ms]
	W1205 06:36:05.821891    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop elided: fresh node GET at 06:36:05.821, then attempts 1-10 at one-second intervals through 06:36:15.868, all empty responses in 2-5 ms]
	W1205 06:36:15.868370    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop elided: fresh node GET at 06:36:15.868, empty response in 2 ms]
	I1205 06:36:16.847285    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[retry loop elided: attempt 1 of the node GET at 06:36:16.871, empty response in 2 ms]
	I1205 06:36:16.928128    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:16.933236    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:36:16.933236    3816 retry.go:31] will retry after 34.477637514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[retry loop elided: attempts 2-10 of the node GET, one per second from 06:36:17.875 to 06:36:25.908, empty responses in 2-4 ms]
	W1205 06:36:25.908570    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop elided: fresh node GET at 06:36:25.908, then attempts 1-3 through 06:36:28.923, empty responses in 2-3 ms]
	I1205 06:36:29.100195    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:36:29.179813    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.183920    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:29.184562    3816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
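
	The error text suggests --validate=false, which skips the OpenAPI schema download, but it would not rescue these applies: the request itself still needs a reachable apiserver, so the command fails either way. For completeness, a hypothetical re-run of the failing command with client-side validation disabled (paths copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// sudo accepts leading VAR=value arguments as environment
		// assignments, matching the sudo KUBECONFIG=... form above.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\nerr: %v\n", out, err)
	}
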
	[retry loop elided: attempts 4-10 of the node GET, one per second from 06:36:29.924 to 06:36:35.951, empty responses in 2-3 ms]
	W1205 06:36:35.951285    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop elided: fresh node GET at 06:36:35.951, then attempts 1-10 at one-second intervals through 06:36:45.993, all empty responses in 2-4 ms]
	W1205 06:36:45.994070    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop elided: fresh node GET at 06:36:45.994, then attempts 1-5 through 06:36:51.019, empty responses in 2-5 ms]
	I1205 06:36:51.417352    3816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:36:51.854034    3816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:36:51.861704    3816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:36:51.865604    3816 out.go:179] * Enabled addons: 
	I1205 06:36:51.868880    3816 addons.go:530] duration metric: took 1m56.7213702s for enable addons: enabled=[]
	[retry loop elided: attempts 6-10 of the node GET, one per second from 06:36:52.020 to 06:36:56.040, empty responses in 2-4 ms]
	W1205 06:36:56.040359    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop elided: fresh node GET at 06:36:56.040, then attempts 1-10 at one-second intervals through 06:37:06.079, all empty responses in 2-3 ms]
	W1205 06:37:06.079598    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	[retry loop elided: fresh node GET at 06:37:06.079, then attempts 1-10 at one-second intervals through 06:37:16.121, all empty responses in 2-3 ms]
	W1205 06:37:16.121438    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:16.121438    3816 type.go:168] "Request Body" body=""
	I1205 06:37:16.121438    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:16.124099    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:17.124588    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:17.124588    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:17.128092    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:18.128319    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:18.128319    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:18.132513    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:19.132736    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:19.132736    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:19.135560    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:20.136515    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:20.136515    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:20.139792    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:21.140167    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:21.140471    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:21.143328    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:22.144039    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:22.144039    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:22.146593    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:23.147175    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:23.147543    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:23.150087    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:24.150247    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:24.150247    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:24.154118    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:25.154433    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:25.154433    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:25.157386    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:26.157568    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:26.157568    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:26.160472    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:26.160472    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:26.160472    3816 type.go:168] "Request Body" body=""
	I1205 06:37:26.161000    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:26.162649    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:27.163417    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:27.163417    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:27.167106    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:28.167812    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:28.167812    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:28.170974    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:29.171418    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:29.171418    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:29.174717    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:30.174973    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:30.174973    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:30.179281    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:31.179472    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:31.179472    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:31.182137    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:32.182463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:32.182463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:32.185914    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:33.186359    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:33.186359    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:33.189745    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:34.190102    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:34.190102    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:34.193507    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:35.194094    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:35.194094    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:35.197205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:36.197770    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:36.197770    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.200498    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:36.200498    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:36.201020    3816 type.go:168] "Request Body" body=""
	I1205 06:37:36.201099    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:36.203111    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:37.204025    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:37.204025    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:37.207133    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:38.207447    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:38.207447    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:38.210787    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:39.211776    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:39.211776    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:39.213772    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:37:40.214710    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:40.214710    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:40.217616    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:41.217767    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:41.217767    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:41.221200    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:42.221683    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:42.222132    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:42.224721    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:43.224982    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:43.224982    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:43.229361    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:44.230310    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:44.230310    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:44.233109    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:45.234073    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:45.234345    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:45.238600    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:37:46.238845    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:46.238845    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:46.242060    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:37:46.242126    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:46.242126    3816 type.go:168] "Request Body" body=""
	I1205 06:37:46.242126    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:46.244330    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:47.245532    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:47.245532    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:47.248646    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:48.249492    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:48.249786    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:48.252034    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:49.252532    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:49.252532    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:49.255984    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:50.256278    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:50.256278    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:50.260022    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:51.260850    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:51.260850    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:51.262856    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:52.263771    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:52.263771    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:52.266969    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:53.267499    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:53.267499    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:53.270917    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:54.271483    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:54.271483    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:54.273932    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:55.274677    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:55.274677    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:55.277978    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:37:56.278630    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:56.278630    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:56.281414    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:37:56.281414    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:37:56.281414    3816 type.go:168] "Request Body" body=""
	I1205 06:37:56.281414    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:56.283686    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:57.283878    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:57.283878    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:57.286826    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:58.287091    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:58.287091    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:58.290488    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:37:59.291169    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:37:59.291169    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:37:59.293886    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:00.294704    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:00.294704    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:00.297861    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:01.298572    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:01.298961    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:01.301760    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:02.302048    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:02.302048    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:02.304517    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:03.305251    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:03.305251    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:03.307969    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:04.308898    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:04.308898    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:04.312237    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:05.313053    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:05.313395    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:05.316566    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:06.316866    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:06.316866    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:06.319941    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:06.319941    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:06.319941    3816 type.go:168] "Request Body" body=""
	I1205 06:38:06.319941    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:06.322349    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:07.322907    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:07.322907    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:07.325564    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:08.326123    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:08.326123    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:08.329670    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:09.330047    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:09.330047    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:09.333169    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:10.333628    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:10.333628    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:10.336729    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:11.337447    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:11.337447    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:11.341026    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:12.342590    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:12.342590    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:12.345509    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:13.345779    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:13.345779    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:13.348736    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:14.349699    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:14.349699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:14.354811    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:38:15.355125    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:15.355699    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:15.358657    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:16.358925    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:16.358925    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:16.362294    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:16.362394    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:16.362515    3816 type.go:168] "Request Body" body=""
	I1205 06:38:16.362576    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:16.366638    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:17.367505    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:17.367505    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:17.370390    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:18.371098    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:18.371098    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:18.374694    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:19.375813    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:19.375813    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:19.378371    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:20.378981    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:20.378981    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:20.382504    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:21.382666    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:21.382666    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:21.386056    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:22.386435    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:22.386435    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:22.389942    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:23.390201    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:23.390201    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:23.394201    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:24.394754    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:24.394754    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:24.399451    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:25.400206    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:25.400654    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:25.403432    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:26.404412    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:26.404412    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:26.407565    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:26.407565    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:26.407565    3816 type.go:168] "Request Body" body=""
	I1205 06:38:26.407565    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:26.410520    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:27.410783    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:27.410783    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:27.413528    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:28.415022    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:28.415022    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:28.418437    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:29.419313    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:29.419313    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:29.422536    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:30.423342    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:30.423497    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:30.426178    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:31.426933    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:31.426933    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:31.430144    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:32.430929    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:32.430929    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:32.434479    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:33.434863    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:33.434863    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:33.437682    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:34.437924    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:34.437924    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:34.440945    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:35.442134    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:35.442134    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:35.444908    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:36.445071    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:36.445071    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:36.448284    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:36.448309    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:36.448309    3816 type.go:168] "Request Body" body=""
	I1205 06:38:36.448309    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:36.450897    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:37.451653    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:37.451944    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:37.455778    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:38.456494    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:38.456494    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:38.459476    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:39.459817    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:39.460047    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:39.462801    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:40.464111    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:40.464111    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:40.467438    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:41.468570    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:41.468570    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:41.471499    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:42.471858    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:42.471858    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:42.475786    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:43.476207    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:43.476207    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:43.479798    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:38:44.480584    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:44.480584    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:44.482596    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:38:45.483834    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:45.483834    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:45.488465    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:38:46.488899    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:38:46.488899    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:46.492762    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:38:46.492857    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:38:46.493009    3816 type.go:168] "Request Body" body=""
	I1205 06:38:46.493069    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:38:46.495877    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
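	The block above is one complete cycle of minikube's node-readiness wait: a GET on /api/v1/nodes/functional-247800 once per second, ten Retry-After-driven attempts, then a node_ready.go warning, then a fresh request body. A minimal sketch of that kind of poll, assuming k8s.io/client-go; the function and variable names here are illustrative and are not minikube's actual node_ready.go code:

	// readypoll.go: illustrative Ready-condition poll (not minikube source).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady issues GET /api/v1/nodes/<name> once per second until the
	// Ready condition is True or the context expires. A failed GET is logged
	// and retried, which is what appears above as attempts 1-10 followed by
	// the "will retry" warning.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			} else {
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config) and poll one node.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-247800"); err != nil {
			panic(err)
		}
	}

	The per-attempt Retry-After backoff seen in with_retry.go is handled inside client-go's transport stack; the sketch only reproduces the outer once-per-second poll that generates these log lines.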
	[log condensed: the identical one-second retry cycle (attempts 1-10 of GET https://127.0.0.1:55398/api/v1/nodes/functional-247800, each answered with an empty status and no headers in 1-5 ms) repeats continuously from 06:38:47 through 06:40:18, and the same node_ready.go:55 warning — error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF — recurs after every tenth attempt, at 06:38:56, 06:39:06, 06:39:16, 06:39:26, 06:39:36, 06:39:46, 06:39:56, 06:40:06, and 06:40:16.]
	I1205 06:40:19.874746    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:19.874746    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:19.877529    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:20.878119    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:20.878119    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:20.881395    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:21.881716    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:21.881716    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:21.884145    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:22.884876    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:22.884876    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:22.887889    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:23.888341    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:23.888494    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:23.891334    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:24.891830    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:24.891830    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:24.895547    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:25.896077    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:25.896077    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:25.898755    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:26.899940    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:26.899940    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:26.903829    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1205 06:40:26.903925    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:26.904028    3816 type.go:168] "Request Body" body=""
	I1205 06:40:26.904082    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:26.907442    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:27.907744    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:27.907744    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:27.911092    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:28.911316    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:28.911316    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:28.914347    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:29.914739    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:29.914739    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:29.918366    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:30.918822    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:30.918822    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:30.921456    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:31.922028    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:31.922028    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:31.925069    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:32.925330    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:32.925330    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:32.928779    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:33.929376    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:33.929376    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:33.933212    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:34.933571    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:34.933571    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:34.936160    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:35.937442    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:35.937442    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:35.941103    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:36.941232    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:36.941232    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.943558    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:36.943558    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:36.943558    3816 type.go:168] "Request Body" body=""
	I1205 06:40:36.943558    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:36.946031    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:40:37.946448    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:37.946847    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:37.949586    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:38.949756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:38.950157    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:38.952901    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:39.953375    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:39.953783    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:39.956248    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:40.957703    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:40.957703    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:40.960899    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:41.961836    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:41.961836    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:41.965167    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:42.965316    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:42.965560    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:42.968007    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:43.968734    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:43.968734    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:43.971410    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:44.972311    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:44.972311    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:44.975433    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:45.976381    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:45.976381    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:45.981080    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1205 06:40:46.981463    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:46.981463    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.986037    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1205 06:40:46.986125    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): Get "https://127.0.0.1:55398/api/v1/nodes/functional-247800": EOF
	I1205 06:40:46.986226    3816 type.go:168] "Request Body" body=""
	I1205 06:40:46.986226    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:46.989122    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:47.989324    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:47.989324    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:47.992720    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:48.992852    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:48.992852    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:48.995205    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:49.995580    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:49.995580    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:49.998526    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:50.998794    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:50.998794    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:51.001637    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:52.002658    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:52.002658    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:52.004968    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:53.005044    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:53.005445    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:53.008445    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1205 06:40:54.009089    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:54.009089    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:54.012447    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1205 06:40:55.012756    3816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:55398/api/v1/nodes/functional-247800"
	I1205 06:40:55.012756    3816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:55398/api/v1/nodes/functional-247800" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1205 06:40:55.015364    3816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1205 06:40:55.523386    3816 node_ready.go:55] error getting node "functional-247800" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 06:40:55.523386    3816 node_ready.go:38] duration metric: took 6m0.0010607s for node "functional-247800" to be "Ready" ...
	I1205 06:40:55.527309    3816 out.go:203] 
	W1205 06:40:55.529851    3816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:40:55.529851    3816 out.go:285] * 
	W1205 06:40:55.531579    3816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:40:55.533404    3816 out.go:203] 
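	The with_retry.go cycles above repeat once per second for the full 6m0s wait: every GET to the forwarded apiserver port returns within a few milliseconds but with an empty (EOF) body, and node_ready.go logs a warning after each batch of ten attempts. A minimal way to observe the same symptom from the host (a sketch; 55398 is the forwarded port from this particular run and changes between runs):
	
		# Probe the endpoint the client was retrying; -k skips TLS verification.
		curl -k -sS -o /dev/null -w '%{http_code}\n' https://127.0.0.1:55398/api/v1/nodes/functional-247800
		# curl exit code 52 ("Empty reply from server") matches the EOF errors logged above.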
	
	
	==> Docker <==
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.520999227Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521005327Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521028530Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.521065534Z" level=info msg="Initializing buildkit"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.631468044Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636567622Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725240Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636825651Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:34:51 functional-247800 dockerd[11011]: time="2025-12-05T06:34:51.636725440Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:34:51 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:51 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:34:51 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:34:52 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:34:52 functional-247800 cri-dockerd[11329]: time="2025-12-05T06:34:52Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:34:52 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
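	The dockerd startup above shows the engine itself came up cleanly but on the deprecated cgroup v1 hierarchy, with cri-dockerd setting the cgroupfs driver. A quick confirmation of what the engine reports (a sketch, run inside the functional-247800 node):
	
		# Print the cgroup driver and cgroup version Docker detected at startup.
		docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'
		# Expected here: "cgroupfs 1", consistent with the cgroup v1 warning above.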
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:44:00.735556   21628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:00.736602   21628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:00.737385   21628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:00.739581   21628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:00.740339   21628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
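	The describe-nodes failure is consistent with the kubelet never launching the static kube-apiserver pod, so nothing listens on 8441 inside the node. A direct check (a sketch using this run's profile name):
	
		# Look for a listener on the apiserver port from inside the node.
		out/minikube-windows-amd64.exe -p functional-247800 ssh -- "sudo ss -tlnp | grep 8441 || echo 'no listener on 8441'"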
	
	
	==> dmesg <==
	[  +0.001158] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001030] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001035] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000969] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000975] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:34] CPU: 4 PID: 56451 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000864] RIP: 0033:0x7f46c5e3eb20
	[  +0.000406] Code: Unable to access opcode bytes at RIP 0x7f46c5e3eaf6.
	[  +0.000950] RSP: 002b:00007fff1eb3d7e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001108] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001199] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000983] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000845] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000799] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000884] FS:  0000000000000000 GS:  0000000000000000
	[  +0.829311] CPU: 0 PID: 56573 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000781] RIP: 0033:0x7f241df52b20
	[  +0.000533] Code: Unable to access opcode bytes at RIP 0x7f241df52af6.
	[  +0.000663] RSP: 002b:00007ffded7fa4e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000781] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:44:00 up  2:17,  0 user,  load average: 0.51, 0.41, 0.58
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:43:57 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:57 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1061.
	Dec 05 06:43:57 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:57 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:57 functional-247800 kubelet[21469]: E1205 06:43:57.975805   21469 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:57 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:57 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:58 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1062.
	Dec 05 06:43:58 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:58 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:58 functional-247800 kubelet[21482]: E1205 06:43:58.728720   21482 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:58 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:58 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:59 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1063.
	Dec 05 06:43:59 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:59 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:59 functional-247800 kubelet[21508]: E1205 06:43:59.481281   21508 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:59 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:59 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:00 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1064.
	Dec 05 06:44:00 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:00 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:00 functional-247800 kubelet[21559]: E1205 06:44:00.241856   21559 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:00 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:00 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
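The kubelet log above is the root cause for this run: kubelet v1.35.0-beta.0 exits immediately on a cgroup v1 host, and the WSL2 kernel here (5.15.153.1-microsoft-standard-WSL2) still mounts the v1 hierarchy, so systemd restarts it in a tight loop (restart counter 1061-1064). Two hedged checks/workarounds, assuming a WSL2-backed Docker Desktop host:

	# 1) Confirm which cgroup hierarchy is mounted (inside the node or the WSL2 VM):
	stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" means v2; "tmpfs" means v1/hybrid

	# 2) Either move the WSL2 VM to pure cgroup v2 via %UserProfile%\.wslconfig:
	#      [wsl2]
	#      kernelCommandLine = cgroup_no_v1=all
	#    and then run `wsl --shutdown`, or keep v1 by setting the KubeletConfiguration
	#    field the kubeadm preflight warning names (failCgroupV1: false) and explicitly
	#    skipping the SystemVerification preflight check.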
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (613.3358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (54.24s)
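For context, this test exercises the kubectl passthrough, i.e. invocations of the form below (a hedged reconstruction; the exact arguments live in functional_test.go):

	# `minikube kubectl --` forwards everything after the separator to a
	# version-matched kubectl run against the profile's cluster.
	out/minikube-windows-amd64.exe -p functional-247800 kubectl -- get pods -A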

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (743.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-247800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:45:22.614359    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:46:45.687907    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:23.908440    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:50:22.618619    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:50:26.985234    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:52:23.913371    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:55:22.623233    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-247800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m20.2070556s)

                                                
                                                
-- stdout --
	* [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001375391s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000747056s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000747056s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
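A retry along the lines of the Suggestion above would look like the sketch below (the --extra-config value comes straight from the Suggestion line, the --file flag from the log-collection hint in the box; this is untested advice, not a confirmed fix for this failure):

	out/minikube-windows-amd64.exe start -p functional-247800 --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-windows-amd64.exe -p functional-247800 logs --file=logs.txt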
functional_test.go:774: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-247800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m20.2158678s for "functional-247800" cluster.
I1205 06:56:22.433449    8036 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
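The dump above is the container's full inspect JSON. When only state and addressing are of interest, the same data can be pulled with docker's built-in Go templates instead of scanning the whole document (a sketch; field paths taken from the output above, and the network name equals the profile name):

	docker inspect -f "{{.State.Status}} {{.NetworkSettings.Networks.functional-247800.IPAddress}}" functional-247800
	docker inspect -f "{{json .NetworkSettings.Ports}}" functional-247800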
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (640.139ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
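Exit status 2 with .Host printing "Running" indicates the host container is up but some other component is not; the remaining template fields narrow down which one (a sketch reusing the --format mechanism from the command above; the .Kubelet/.APIServer/.Kubeconfig field names are assumed from minikube's status output, not taken from this log):

	out/minikube-windows-amd64.exe status -p functional-247800 --format="host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}"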
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.3642461s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-088800 image ls --format yaml --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ ssh     │ functional-088800 ssh pgrep buildkitd                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ image   │ functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr                  │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls                                                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format json --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format table --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ delete  │ -p functional-088800                                                                                                    │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:26 UTC │
	│ start   │ -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:26 UTC │                     │
	│ start   │ -p functional-247800 --alsologtostderr -v=8                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:34 UTC │                     │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:41 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:latest                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add minikube-local-cache-test:functional-247800                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache delete minikube-local-cache-test:functional-247800                                              │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl images                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ cache   │ functional-247800 cache reload                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ kubectl │ functional-247800 kubectl -- --context functional-247800 get pods                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ start   │ -p functional-247800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:44:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:44:02.272034    7212 out.go:360] Setting OutFile to fd 1444 ...
	I1205 06:44:02.317383    7212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:02.317383    7212 out.go:374] Setting ErrFile to fd 2004...
	I1205 06:44:02.317383    7212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:02.332249    7212 out.go:368] Setting JSON to false
	I1205 06:44:02.336248    7212 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8300,"bootTime":1764908742,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:44:02.336248    7212 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:44:02.343248    7212 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:44:02.346834    7212 notify.go:221] Checking for updates...
	I1205 06:44:02.346834    7212 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:44:02.349109    7212 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:44:02.350847    7212 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:44:02.353405    7212 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:44:02.355242    7212 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:44:02.357599    7212 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:02.357599    7212 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:44:02.542801    7212 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:44:02.547077    7212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:02.784844    7212 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-05 06:44:02.759817606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:44:02.788514    7212 out.go:179] * Using the docker driver based on existing profile
	I1205 06:44:02.790794    7212 start.go:309] selected driver: docker
	I1205 06:44:02.790794    7212 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:02.790794    7212 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:44:02.797110    7212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:03.043306    7212 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-05 06:44:03.019620575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:44:03.123839    7212 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:44:03.123839    7212 cni.go:84] Creating CNI manager for ""
	I1205 06:44:03.123839    7212 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:44:03.123839    7212 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:03.128293    7212 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:44:03.130664    7212 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:44:03.134094    7212 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:44:03.137567    7212 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:44:03.137567    7212 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:44:03.180283    7212 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:44:03.219602    7212 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:44:03.219602    7212 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:44:03.490854    7212 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:44:03.491134    7212 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:44:03.493285    7212 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:44:03.493386    7212 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:03.493386    7212 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-247800"
	I1205 06:44:03.493386    7212 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:44:03.493386    7212 fix.go:54] fixHost starting: 
	I1205 06:44:03.504606    7212 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:44:03.588000    7212 fix.go:112] recreateIfNeeded on functional-247800: state=Running err=<nil>
	W1205 06:44:03.588000    7212 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:44:03.607696    7212 out.go:252] * Updating the running docker "functional-247800" container ...
	I1205 06:44:03.607696    7212 machine.go:94] provisionDockerMachine start ...
	I1205 06:44:03.620462    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:03.791695    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:03.792694    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:03.792694    7212 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:44:04.191189    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:44:04.191189    7212 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:44:04.196954    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:04.962117    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:04.963119    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:04.963119    7212 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:44:05.528862    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:44:05.533862    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:05.785961    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:05.785961    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:05.785961    7212 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:44:05.993200    7212 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:05.993991    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:44:05.994386    7212 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.5030361s
	I1205 06:44:05.994386    7212 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:44:05.994965    7212 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:05.996380    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:44:05.996380    7212 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.5050299s
	I1205 06:44:05.996380    7212 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:44:06.001965    7212 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.001965    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:44:06.001965    7212 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.5106152s
	I1205 06:44:06.001965    7212 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:44:06.024972    7212 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.025248    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:44:06.025248    7212 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.5338979s
	I1205 06:44:06.025248    7212 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:44:06.030397    7212 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.030653    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:44:06.030804    7212 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.5394539s
	I1205 06:44:06.030804    7212 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:44:06.057622    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:44:06.057686    7212 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:44:06.057828    7212 ubuntu.go:190] setting up certificates
	I1205 06:44:06.057876    7212 provision.go:84] configureAuth start
	I1205 06:44:06.063201    7212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:44:06.079402    7212 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.079402    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:44:06.079402    7212 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.5880514s
	I1205 06:44:06.079402    7212 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:44:06.127402    7212 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.127402    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:44:06.127402    7212 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.6360504s
	I1205 06:44:06.127402    7212 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:44:06.127402    7212 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.128401    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:44:06.128401    7212 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.6370492s
	I1205 06:44:06.128401    7212 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:44:06.128401    7212 cache.go:87] Successfully saved all images to host disk.
	I1205 06:44:06.133387    7212 provision.go:143] copyHostCerts
	I1205 06:44:06.133387    7212 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:44:06.133387    7212 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:44:06.134387    7212 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:44:06.134387    7212 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:44:06.135392    7212 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:44:06.135392    7212 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:44:06.135392    7212 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:44:06.135392    7212 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:44:06.136402    7212 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:44:06.136402    7212 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
	I1205 06:44:06.163392    7212 provision.go:177] copyRemoteCerts
	I1205 06:44:06.167399    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:44:06.170398    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:06.226397    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:06.360157    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:44:06.390856    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:44:06.422898    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:44:06.452624    7212 provision.go:87] duration metric: took 394.7423ms to configureAuth
	I1205 06:44:06.452624    7212 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:44:06.452624    7212 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:06.457638    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:06.514727    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:06.514768    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:06.514768    7212 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:44:06.696044    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:44:06.696090    7212 ubuntu.go:71] root file system type: overlay
	I1205 06:44:06.696090    7212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:44:06.699335    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:06.754511    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:06.755263    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:06.755357    7212 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:44:06.951048    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:44:06.954929    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.012752    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:07.013752    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:07.013752    7212 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:44:07.221929    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:44:07.221949    7212 machine.go:97] duration metric: took 3.6142004s to provisionDockerMachine
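
	The unit update just above is deliberately idempotent: the rendered unit is written to docker.service.new, and only when `diff -u` reports a difference is it moved into place followed by daemon-reload, enable, and restart. A quick way to confirm what systemd actually loaded afterwards (illustrative commands, not part of this test run):

	    # Show the unit file systemd loaded and the effective dockerd flags.
	    sudo systemctl cat docker.service
	    systemctl show -p ExecStart docker.service
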
	I1205 06:44:07.221974    7212 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:44:07.221974    7212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:44:07.226668    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:44:07.229222    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.288022    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.425061    7212 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:44:07.435656    7212 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:44:07.435656    7212 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:44:07.435656    7212 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:44:07.436190    7212 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:44:07.437151    7212 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:44:07.437615    7212 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:44:07.442100    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:44:07.458772    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:44:07.490927    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:44:07.521512    7212 start.go:296] duration metric: took 299.5056ms for postStartSetup
	I1205 06:44:07.526199    7212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:44:07.528904    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.584765    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.708107    7212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:44:07.715598    7212 fix.go:56] duration metric: took 4.2221494s for fixHost
	I1205 06:44:07.716591    7212 start.go:83] releasing machines lock for "functional-247800", held for 4.2221494s
	I1205 06:44:07.719938    7212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:44:07.774650    7212 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:44:07.778633    7212 ssh_runner.go:195] Run: cat /version.json
	I1205 06:44:07.779199    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.781778    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.835000    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.846698    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.959833    7212 ssh_runner.go:195] Run: systemctl --version
	W1205 06:44:07.966184    7212 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 06:44:07.976576    7212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:44:07.985928    7212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:44:07.990302    7212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:44:08.006960    7212 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:44:08.006960    7212 start.go:496] detecting cgroup driver to use...
	I1205 06:44:08.006960    7212 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:44:08.007486    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:44:08.037172    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:44:08.060370    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:44:08.076873    7212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:44:08.081935    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1205 06:44:08.088262    7212 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:44:08.088262    7212 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
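
	The warning above follows directly from the failed probe at 06:44:07.774650: ssh_runner executed `curl.exe` inside the Linux container, where the Windows binary name does not resolve (`bash: line 1: curl.exe: command not found`, status 127), so the registry check fails regardless of actual network reachability. A minimal sketch of the same probe using the Linux binary name (hypothetical; not what this run executed):

	    # Connectivity probe with the Linux curl binary instead of curl.exe.
	    curl -sS -m 2 https://registry.k8s.io/ \
	      && echo "registry reachable" \
	      || echo "registry unreachable or curl missing"
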
	I1205 06:44:08.102235    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:44:08.120429    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:44:08.138453    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:44:08.157604    7212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:44:08.178745    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:44:08.197474    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:44:08.219535    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:44:08.241784    7212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:44:08.262205    7212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:44:08.281639    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:08.508817    7212 ssh_runner.go:195] Run: sudo systemctl restart containerd
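
	The sed edits at 06:44:08 rewrite several settings in /etc/containerd/config.toml before this restart. Assuming a stock config layout (the surrounding table structure is not dumped by the run), the affected lines end up roughly as below:

	    # Sketch of /etc/containerd/config.toml after the sed edits above
	    # (table layout assumed; values taken from the commands in the log).
	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10.1"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false
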
	I1205 06:44:08.800623    7212 start.go:496] detecting cgroup driver to use...
	I1205 06:44:08.800623    7212 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:44:08.805535    7212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:44:08.829203    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:44:08.853336    7212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:44:08.916688    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:44:08.939467    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:44:08.959334    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:44:08.987138    7212 ssh_runner.go:195] Run: which cri-dockerd
	I1205 06:44:08.999563    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:44:09.015960    7212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:44:09.041179    7212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:44:09.185621    7212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:44:09.352956    7212 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:44:09.352956    7212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
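
	The 130-byte daemon.json copied here is not echoed in the log; a plausible sketch consistent with the "cgroupfs" driver noted above (contents assumed, not dumped from this run):

	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "log-driver": "json-file",
	      "log-opts": { "max-size": "100m" },
	      "storage-driver": "overlay2"
	    }
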
	I1205 06:44:09.378298    7212 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:44:09.400179    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:09.536455    7212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:44:10.536962    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:44:10.559790    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:44:10.581363    7212 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 06:44:10.609733    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:44:10.632909    7212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:44:10.776807    7212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:44:10.916613    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:11.075698    7212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:44:11.101329    7212 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:44:11.124502    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:11.266418    7212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:44:11.403053    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:44:11.422521    7212 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:44:11.426547    7212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:44:11.436721    7212 start.go:564] Will wait 60s for crictl version
	I1205 06:44:11.441180    7212 ssh_runner.go:195] Run: which crictl
	I1205 06:44:11.452770    7212 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:44:11.501872    7212 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
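
	crictl resolves its endpoint from the /etc/crictl.yaml written at 06:44:08 (runtime-endpoint: unix:///var/run/cri-dockerd.sock). The same query can be made with the endpoint spelled out explicitly (illustrative, not part of the run):

	    # Equivalent explicit invocation; normally the endpoint comes from /etc/crictl.yaml.
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
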
	I1205 06:44:11.505976    7212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:44:11.549324    7212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:44:11.588516    7212 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:44:11.591303    7212 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:44:11.795650    7212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:44:11.800040    7212 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:44:11.812421    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:11.871146    7212 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:44:11.873758    7212 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:44:11.873758    7212 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:44:11.877101    7212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:44:11.912208    7212 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-247800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1205 06:44:11.912283    7212 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:44:11.912321    7212 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:44:11.912565    7212 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
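
	The kubelet unit rendered above clears the packaged ExecStart and relaunches kubelet with node-specific flags; it lands in /lib/systemd/system/kubelet.service with a drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (both copied a few lines below). To inspect the merged result systemd will run (illustrative, not part of the run):

	    # Show the kubelet unit with its 10-kubeadm.conf drop-in merged in.
	    sudo systemctl cat kubelet.service
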
	I1205 06:44:11.916049    7212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:44:12.318628    7212 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:44:12.318628    7212 cni.go:84] Creating CNI manager for ""
	I1205 06:44:12.318628    7212 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:44:12.318628    7212 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:44:12.318628    7212 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:44:12.318628    7212 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:44:12.323147    7212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:44:12.338722    7212 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:44:12.342793    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:44:12.357028    7212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:44:12.378067    7212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:44:12.397995    7212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
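
	With kubeadm.yaml.new staged under /var/tmp/minikube, the generated config can be sanity-checked before any phase consumes it. A hedged sketch, assuming the `kubeadm config validate` subcommand is available in this v1.35.0-beta.0 build:

	    # Hypothetical pre-flight validation of the staged config.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
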
	I1205 06:44:12.425172    7212 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:44:12.436596    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:12.576722    7212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:44:12.598561    7212 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:44:12.598561    7212 certs.go:195] generating shared ca certs ...
	I1205 06:44:12.598561    7212 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:44:12.599202    7212 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:44:12.599202    7212 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:44:12.599202    7212 certs.go:257] generating profile certs ...
	I1205 06:44:12.600184    7212 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:44:12.600278    7212 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:44:12.600278    7212 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:44:12.601471    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:44:12.601693    7212 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:44:12.601727    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:44:12.601917    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:44:12.602080    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:44:12.602241    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:44:12.602561    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:44:12.604739    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:44:12.633587    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:44:12.661761    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:44:12.693749    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:44:12.724397    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:44:12.753386    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:44:12.782245    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:44:12.808447    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:44:12.837845    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:44:12.868598    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:44:12.897877    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:44:12.923594    7212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:44:12.948661    7212 ssh_runner.go:195] Run: openssl version
	I1205 06:44:12.969868    7212 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:44:12.988567    7212 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:44:13.010500    7212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:44:13.020473    7212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:44:13.025024    7212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:44:13.078161    7212 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:44:13.096521    7212 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.112321    7212 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:44:13.130493    7212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.138877    7212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.143299    7212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.190013    7212 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:44:13.206117    7212 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.222622    7212 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:44:13.239301    7212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.246185    7212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.249183    7212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.298930    7212 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
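
	The three install-and-hash sequences above follow OpenSSL's hashed-directory convention: each PEM is symlinked into /etc/ssl/certs under its own name, and the `openssl x509 -hash` output (3ec20f2e, b5213941, 51391683) names a second link that OpenSSL uses for subject lookup. The pattern, condensed (paths taken from this run):

	    # Hashed-symlink pattern behind the checks above; for 8036.pem, h=51391683.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem)
	    sudo ln -fs /usr/share/ca-certificates/8036.pem "/etc/ssl/certs/${h}.0"
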
	I1205 06:44:13.315886    7212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:44:13.326014    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:44:13.380225    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:44:13.429527    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:44:13.479032    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:44:13.536127    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:44:13.583832    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
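
	The `-checkend 86400` probes above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire within that window, which is why these checks pass silently. A minimal sketch:

	    # Exit 0 if the cert remains valid 24h from now, 1 if it will have expired.
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"
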
	I1205 06:44:13.629178    7212 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:13.633659    7212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:44:13.671791    7212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:44:13.685483    7212 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:44:13.685483    7212 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:44:13.690488    7212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:44:13.703539    7212 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.707834    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:13.761963    7212 kubeconfig.go:125] found "functional-247800" server: "https://127.0.0.1:55398"
	I1205 06:44:13.770445    7212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:44:13.785736    7212 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:26:36.498184726 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:44:12.408045869 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1205 06:44:13.785736    7212 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:44:13.789544    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:44:13.823105    7212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:44:13.848716    7212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:44:13.861649    7212 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  5 06:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  5 06:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  5 06:30 /etc/kubernetes/scheduler.conf
	
	I1205 06:44:13.866874    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:44:13.884456    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:44:13.897824    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.902988    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:44:13.923754    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:44:13.938317    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.942723    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:44:13.963344    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:44:13.977185    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.982171    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:44:13.999803    7212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:44:14.022527    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:14.262599    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:14.847747    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:15.087926    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:15.158153    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
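
	The restart path above re-runs individual `kubeadm init` phases against the updated kubeadm.yaml rather than performing a full init: certs, kubeconfigs, kubelet start, control-plane manifests, then the local etcd manifest. Condensed as a sketch of the same sequence:

	    # Condensed sketch of the phased restart executed above.
	    BIN=/var/lib/minikube/binaries/v1.35.0-beta.0
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      # $phase is intentionally unquoted so "certs all" expands to two arguments.
	      sudo env PATH="$BIN:$PATH" $BIN/kubeadm init phase $phase \
	        --config /var/tmp/minikube/kubeadm.yaml
	    done
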
	I1205 06:44:15.213568    7212 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:44:15.218358    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:44:15.718866    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:44:16.219377    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... sudo pgrep -xnf kube-apiserver.*minikube.* repeated at ~500ms intervals from 06:44:16.718800 through 06:45:14.219300, each time finding no kube-apiserver process ...]
	I1205 06:45:14.719308    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:15.220591    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:15.359365    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.359365    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:15.363072    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:15.396147    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.396147    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:15.400087    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:15.427850    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.427850    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:15.432163    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:15.465699    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.465738    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:15.470379    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:15.497629    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.497629    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:15.501723    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:15.532988    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.532988    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:15.536536    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:15.566283    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.566283    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:15.566312    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:15.566312    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:15.596491    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:15.596491    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:15.856069    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:15.847392   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.848413   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.849846   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.851083   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.852149   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:15.856069    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:15.856069    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:15.909731    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:15.909731    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:16.118756    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:16.118756    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:18.687756    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:18.710448    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:18.741748    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.741748    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:18.745513    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:18.775360    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.775360    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:18.779658    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:18.809441    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.809501    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:18.813014    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:18.838816    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.838816    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:18.844145    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:18.873602    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.873602    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:18.877250    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:18.905073    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.905073    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:18.909137    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:18.936411    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.936411    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:18.936411    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:18.936411    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:18.998916    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:18.998916    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:19.033230    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:19.033230    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:19.127028    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:19.115750   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.116628   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.119350   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.120153   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.122188   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:19.127028    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:19.127028    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:19.167683    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:19.167683    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:21.730298    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:21.753423    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:21.784095    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.784095    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:21.787764    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:21.817963    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.817963    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:21.821515    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:21.850539    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.850539    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:21.854672    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:21.884098    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.884098    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:21.887228    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:21.917593    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.917593    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:21.921273    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:21.949149    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.949149    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:21.955019    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:21.983212    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.983212    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:21.983212    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:21.983212    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:22.012499    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:22.012499    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:22.098043    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:22.089093   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.090339   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.091481   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.093690   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.095138   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:22.098043    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:22.098090    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:22.141887    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:22.141887    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:22.194066    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:22.194066    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:24.762325    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:24.785756    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:24.815636    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.815636    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:24.819508    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:24.847760    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.847760    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:24.851370    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:24.881012    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.881012    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:24.884680    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:24.912270    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.912270    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:24.916105    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:24.953416    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.953416    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:24.956423    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:24.990968    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.990968    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:24.994533    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:25.027815    7212 logs.go:282] 0 containers: []
	W1205 06:45:25.027815    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:25.027815    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:25.027815    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:25.071824    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:25.071824    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:25.123386    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:25.123386    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:25.186859    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:25.186859    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:25.219822    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:25.219822    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:25.305505    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:25.292510   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.293380   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.296569   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.299079   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.300434   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:27.811804    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:27.835330    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:27.868714    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.868714    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:27.872277    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:27.903268    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.903268    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:27.906779    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:27.936644    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.936644    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:27.940640    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:27.969693    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.969693    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:27.973532    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:28.000473    7212 logs.go:282] 0 containers: []
	W1205 06:45:28.000547    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:28.004187    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:28.046248    7212 logs.go:282] 0 containers: []
	W1205 06:45:28.046248    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:28.050184    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:28.081528    7212 logs.go:282] 0 containers: []
	W1205 06:45:28.081528    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:28.081528    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:28.081528    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:28.144979    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:28.144979    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:28.176452    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:28.177413    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:28.262251    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:28.249064   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.249788   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.253017   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.253886   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.257064   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:28.262273    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:28.262273    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:28.303948    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:28.303948    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:30.856838    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:30.882687    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:30.912491    7212 logs.go:282] 0 containers: []
	W1205 06:45:30.912491    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:30.915939    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:30.944823    7212 logs.go:282] 0 containers: []
	W1205 06:45:30.944823    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:30.948477    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:30.977785    7212 logs.go:282] 0 containers: []
	W1205 06:45:30.977785    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:30.981554    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:31.007905    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.007905    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:31.012068    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:31.041316    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.041365    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:31.044854    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:31.073275    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.073313    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:31.076949    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:31.106563    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.106563    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:31.106563    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:31.106563    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:31.168102    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:31.169096    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:31.199523    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:31.199523    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:31.278155    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:31.270054   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.271046   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.271939   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.274002   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.275003   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:31.278155    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:31.278155    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:31.318537    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:31.318537    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:33.870845    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:33.895814    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:33.927486    7212 logs.go:282] 0 containers: []
	W1205 06:45:33.927522    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:33.931201    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:33.958957    7212 logs.go:282] 0 containers: []
	W1205 06:45:33.958957    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:33.962725    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:33.989625    7212 logs.go:282] 0 containers: []
	W1205 06:45:33.989687    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:33.993181    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:34.023503    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.023522    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:34.027565    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:34.061896    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.061896    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:34.065443    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:34.096984    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.096984    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:34.101057    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:34.131058    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.131123    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:34.131123    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:34.131123    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:34.196576    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:34.196576    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:34.225898    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:34.225898    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:34.311791    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:34.301898   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.303342   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.305061   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.306151   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.307428   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:34.311791    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:34.311791    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:34.354337    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:34.354337    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:36.906318    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:36.928488    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:36.957454    7212 logs.go:282] 0 containers: []
	W1205 06:45:36.957454    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:36.961333    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:36.988983    7212 logs.go:282] 0 containers: []
	W1205 06:45:36.988983    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:36.992781    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:37.022093    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.022125    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:37.025722    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:37.057399    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.057399    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:37.060912    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:37.087172    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.087226    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:37.090387    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:37.119994    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.119994    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:37.123787    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:37.151631    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.151631    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:37.151631    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:37.151631    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:37.195262    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:37.195262    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:37.246012    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:37.246080    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:37.316036    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:37.316036    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:37.345867    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:37.345867    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:37.426410    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:37.416173   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.417045   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.420344   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.421777   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.422945   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:39.932657    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:39.956038    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:39.984625    7212 logs.go:282] 0 containers: []
	W1205 06:45:39.984653    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:39.988440    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:40.017153    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.017153    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:40.020736    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:40.052553    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.052621    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:40.056300    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:40.085219    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.085219    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:40.089578    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:40.121915    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.121915    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:40.125581    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:40.154622    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.154673    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:40.158465    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:40.188578    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.188578    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:40.188578    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:40.188578    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:40.245066    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:40.245066    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:40.305771    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:40.305771    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:40.337088    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:40.337088    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:40.418759    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:40.409806   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.410826   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.412482   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.414429   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.415726   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:45:40.419320    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:40.419320    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:42.967507    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:42.991075    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:43.021034    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.021108    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:43.024790    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:43.053883    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.053883    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:43.057674    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:43.088625    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.088625    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:43.092086    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:43.119636    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.119636    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:43.122763    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:43.150111    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.150111    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:43.154265    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:43.182836    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.182836    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:43.186792    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:43.225828    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.225828    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:43.225828    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:43.225828    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:43.290065    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:43.290065    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:43.321138    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:43.321138    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:43.398577    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:43.389973   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.390880   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.393291   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.394242   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.395582   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:43.389973   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.390880   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.393291   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.394242   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.395582   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:43.398577    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:43.398577    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:43.439980    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:43.439980    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:46.001165    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:46.028196    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:46.061568    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.061568    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:46.065437    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:46.095425    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.095470    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:46.099504    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:46.130002    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.130002    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:46.133511    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:46.162609    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.162689    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:46.166324    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:46.195578    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.195578    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:46.199354    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:46.228354    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.228354    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:46.232169    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:46.261558    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.261595    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:46.261595    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:46.261623    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:46.304385    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:46.304385    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:46.359760    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:46.359760    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:46.422582    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:46.422582    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:46.452110    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:46.452110    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:46.530734    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:46.522774   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.523673   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.525329   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.526302   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.527328   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:46.522774   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.523673   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.525329   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.526302   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.527328   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
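
The block above is one complete pass of minikube's control-plane diagnostic loop: a pgrep for a running kube-apiserver, then a docker ps name filter for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), each returning zero containers. A minimal bash sketch of that per-component check, assuming the node is reachable with "minikube ssh" and using a placeholder profile name that does not come from this log:

    # Sketch only: "functional-000000" is a hypothetical profile name.
    # The k8s_<component> name filter mirrors the filters in the log above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(minikube -p functional-000000 ssh -- docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      # An empty result corresponds to the 'No container was found matching' warnings.
      [ -z "$ids" ] && echo "no container found matching ${c}"
    done
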
	I1205 06:45:49.036286    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:49.060305    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:49.095037    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.095063    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:49.098656    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:49.128743    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.128778    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:49.132200    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:49.165097    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.165097    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:49.168869    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:49.200301    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.200301    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:49.203308    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:49.237385    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.237385    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:49.240910    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:49.270260    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.270293    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:49.273438    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:49.302145    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.302145    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:49.302145    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:49.302145    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:49.366684    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:49.366684    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:49.396497    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:49.396497    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:49.481456    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:49.471608   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.472504   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.475721   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.477167   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.478188   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:49.471608   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.472504   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.475721   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.477167   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.478188   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:49.481456    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:49.481496    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:49.524124    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:49.525124    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:52.084310    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:52.107012    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:52.137266    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.137266    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:52.142096    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:52.169325    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.169325    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:52.174093    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:52.204247    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.205151    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:52.208943    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:52.238232    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.238322    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:52.241769    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:52.269688    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.269688    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:52.273627    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:52.303607    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.303607    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:52.307182    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:52.337626    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.337626    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:52.337626    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:52.337626    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:52.398186    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:52.398186    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:52.428798    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:52.428798    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:52.514157    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:52.505332   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.506562   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.508566   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.509865   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.511864   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:52.505332   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.506562   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.508566   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.509865   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.511864   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:52.514157    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:52.514157    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:52.558771    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:52.558771    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:55.113907    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:55.143620    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:55.174228    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.174228    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:55.179458    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:55.209480    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.209480    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:55.213349    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:55.242540    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.242540    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:55.246462    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:55.276353    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.276353    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:55.280471    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:55.308841    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.308841    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:55.312911    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:55.341094    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.341094    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:55.344858    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:55.375031    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.375031    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:55.375031    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:55.375031    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:55.437561    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:55.437561    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:55.473071    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:55.473071    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:55.550825    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:55.539067   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.541138   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.542837   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.543977   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.545029   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:55.539067   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.541138   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.542837   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.543977   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.545029   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:55.550825    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:55.550825    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:55.593704    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:55.593704    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:58.150849    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:58.173353    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:58.208754    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.208818    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:58.212164    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:58.243761    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.243761    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:58.250955    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:58.281367    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.281367    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:58.284495    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:58.316967    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.316967    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:58.320494    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:58.348625    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.348625    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:58.352160    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:58.381869    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.381903    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:58.385500    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:58.414468    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.414468    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:58.414468    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:58.414468    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:58.477173    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:58.477173    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:58.510921    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:58.510921    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:58.588841    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:58.578179   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.579030   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.581977   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.583255   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.584598   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:58.578179   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.579030   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.581977   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.583255   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.584598   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:58.588841    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:58.588841    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:58.631288    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:58.631288    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:01.185827    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:01.211669    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:01.240318    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.240318    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:01.244369    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:01.272954    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.272984    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:01.276875    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:01.304496    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.304496    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:01.308428    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:01.337895    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.337895    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:01.342072    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:01.371342    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.371342    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:01.375396    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:01.405645    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.405645    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:01.409318    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:01.438488    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.438488    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:01.438488    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:01.438488    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:01.501375    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:01.501375    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:01.531923    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:01.531923    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:01.611098    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:01.599379   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.600362   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.603424   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.604236   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.606692   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:46:01.599379   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.600362   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.603424   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.604236   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.606692   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:46:01.611098    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:01.611098    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:01.651778    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:01.651778    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:04.210929    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:04.234235    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:04.266339    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.266339    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:04.270369    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:04.298003    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.298003    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:04.301903    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:04.337407    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.337407    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:04.344300    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:04.372934    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.372934    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:04.376896    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:04.405443    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.405443    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:04.411712    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:04.445219    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.445219    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:04.448803    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:04.477773    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.477773    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:04.477773    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:04.477773    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:04.540878    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:04.540878    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:04.574210    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:04.574255    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:04.661787    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:04.649784   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.650558   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.654016   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.655795   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.657103   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:46:04.649784   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.650558   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.654016   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.655795   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.657103   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:46:04.661787    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:04.661828    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:04.705800    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:04.705800    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:07.260460    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:07.282560    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:07.313615    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.313615    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:07.317917    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:07.349712    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.349712    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:07.356819    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:07.386408    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.386408    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:07.391604    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:07.420438    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.420438    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:07.424140    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:07.462197    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.462237    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:07.465807    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:07.496995    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.497043    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:07.501612    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:07.531112    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.531112    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:07.531112    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:07.531112    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:07.572585    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:07.572585    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:07.640780    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:07.640816    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:07.702867    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:07.702867    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:07.735207    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:07.735207    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:07.815128    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:07.804587   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.805658   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.806988   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.808251   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.809059   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:46:07.804587   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.805658   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.806988   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.808251   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.809059   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
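
Every "describe nodes" attempt above fails the same way: kubectl inside the node cannot reach the apiserver on localhost:8441, retries the API group discovery five times, and gives up with "connection refused". A minimal sketch for probing that endpoint directly from a shell inside the node; only the port comes from the log, and the curl flags are illustrative assumptions rather than anything the test harness runs:

    # Run inside the node, e.g. after `minikube ssh`.
    # -k skips TLS verification, --connect-timeout bounds the dial; both are
    # illustrative choices, not taken from the harness.
    if curl -ks --connect-timeout 5 https://localhost:8441/healthz >/dev/null; then
      echo "apiserver responding on 8441"
    else
      echo "connection refused or timeout: nothing listening on 8441"
    fi
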
	I1205 06:46:10.321242    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:10.347077    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:10.375550    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.375550    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:10.379531    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:10.409415    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.409415    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:10.413063    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:10.440057    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.440091    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:10.443652    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:10.472632    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.472632    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:10.477415    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:10.504835    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.504908    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:10.508498    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:10.536667    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.536667    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:10.540145    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:10.569461    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.569461    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:10.569461    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:10.569461    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:10.623261    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:10.623261    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:10.687563    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:10.688564    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:10.722237    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:10.722237    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:10.805565    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:10.795710   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.796624   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.799048   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.800169   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.801133   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:46:10.795710   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.796624   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.799048   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.800169   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.801133   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:46:10.805565    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:10.805565    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:13.353377    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:13.376836    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:13.408935    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.408935    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:13.412283    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:13.440589    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.440589    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:13.443942    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:13.471789    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.471789    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:13.475592    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:13.507158    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.507158    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:13.510673    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:13.539005    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.539005    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:13.542972    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:13.571336    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.571336    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:13.575544    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:13.607804    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.607804    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:13.607804    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:13.607804    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:13.659026    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:13.659026    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:13.720978    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:13.720978    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:13.749991    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:13.749991    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:13.834647    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:13.826165   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.826856   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.829290   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.830477   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.831195   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:46:13.826165   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.826856   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.829290   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.830477   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.831195   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:46:13.834647    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:13.834647    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
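With no containers to inspect, the fallback diagnostics are host-level: the kubelet and docker/cri-docker journals, filtered dmesg, and a container-status command with a built-in fallback - `which crictl || echo crictl` substitutes the literal word crictl when the binary is missing, so the first pipeline fails cleanly and `|| sudo docker ps -a` runs instead. A sketch of gathering those same four sources, assuming they run locally rather than over minikube's ssh_runner (the gather helper and map are ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // The gather step above shells fixed commands through /bin/bash; these
    // are the four host-level sources it falls back to.
    var logSources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func gather() map[string]string {
        out := make(map[string]string)
        for name, cmd := range logSources {
            b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                // keep partial output; a dead unit still yields useful lines
                out[name] = string(b) + "\n(gather error: " + err.Error() + ")"
                continue
            }
            out[name] = string(b)
        }
        return out
    }

    func main() {
        for name, text := range gather() {
            fmt.Printf("== %s ==\n%s\n", name, text)
        }
    }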
	... the poll/log-gather cycle above repeats at 06:46:16, 06:46:19, 06:46:22, 06:46:25, 06:46:28, 06:46:31, 06:46:34, 06:46:37, and 06:46:40; each pass finds 0 containers for every control-plane component and ends with the same empty stdout and "The connection to the server localhost:8441 was refused" stderr from kubectl describe nodes ...
	I1205 06:46:43.609754    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.632554    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:43.662166    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.662166    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:43.665355    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:43.696151    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.696219    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:43.700087    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:43.727564    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.727564    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:43.731288    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:43.758985    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.758985    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:43.762842    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:43.790701    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.790701    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:43.793863    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:43.820625    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.820693    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:43.824094    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:43.851412    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.851412    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:43.851412    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:43.851412    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:43.932012    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:43.923514   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.924816   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.925954   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.927352   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.928369   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:43.932012    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:43.932012    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:43.973822    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:43.973822    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:44.030002    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:44.030002    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:44.092544    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:44.092544    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:46.629663    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.653580    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:46.683980    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.683980    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:46.687586    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:46.717184    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.717184    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:46.721065    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:46.752185    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.752185    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:46.756108    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:46.784945    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.784945    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:46.789076    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:46.816728    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.816728    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:46.820832    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:46.849937    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.849937    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:46.853438    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:46.881199    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.881199    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:46.881199    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:46.881199    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:46.962790    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:46.954028   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.954924   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.957321   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.958298   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.959408   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:46.962790    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:46.962790    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:47.007820    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:47.007820    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:47.066959    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:47.066959    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:47.125526    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:47.125526    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:49.660220    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.685156    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:49.717329    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.717329    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:49.721556    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:49.750686    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.750686    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:49.755424    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:49.783846    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.783846    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:49.787710    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:49.815924    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.815924    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:49.819919    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:49.849422    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.849422    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:49.852791    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:49.881693    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.881693    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:49.885723    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:49.911812    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.911897    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:49.911897    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:49.911897    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:49.959749    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:49.959839    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:50.023079    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:50.023079    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:50.052407    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:50.053403    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:50.135599    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:50.126558   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.127490   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.129646   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.130468   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.132768   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:50.135599    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:50.135599    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:52.683359    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.706979    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:52.736319    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.736342    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:52.739824    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:52.767310    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.767310    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:52.770588    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:52.804418    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.804418    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:52.808338    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:52.836067    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.836133    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:52.840112    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:52.867407    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.867407    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:52.871353    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:52.903797    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.903797    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:52.907366    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:52.937346    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.937346    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:52.937346    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:52.937346    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:52.966187    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:52.966187    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:53.057434    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:53.048926   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.050108   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.050951   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.053229   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.054407   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:53.057434    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:53.057434    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:53.098631    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:53.098631    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:53.151321    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:53.151321    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
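	Each probe cycle issues the same seven docker ps calls, one per control-plane component. A condensed equivalent of that scan, with the component list and the filter/format flags taken verbatim from the Run: lines above:
	# Report any component with no matching k8s_<name> container.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	  [ -n "$ids" ] || echo "no container matching \"$c\""
	done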
	I1205 06:46:55.719442    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.742352    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:55.776348    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.776348    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:55.780248    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:55.809917    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.809917    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:55.813910    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:55.842184    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.842184    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:55.845526    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:55.873424    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.873424    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:55.877454    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:55.904884    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.904914    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:55.908497    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:55.939112    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.939192    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:55.943140    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:55.972013    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.972013    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:55.972013    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:55.972013    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:56.035906    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:56.035906    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:56.065757    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:56.065757    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:56.150728    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:56.139664   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.141024   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.142888   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.143569   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.145258   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:56.150728    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:56.150728    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:56.191341    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:56.191341    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:58.747043    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.769477    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:58.799752    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.799752    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:58.803430    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:58.834902    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.834902    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:58.839294    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:58.865557    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.865557    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:58.869041    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:58.898315    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.898315    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:58.902805    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:58.930333    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.930333    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:58.934379    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:58.961514    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.961514    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:58.965260    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:58.996805    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.996805    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:58.996843    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:58.996843    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:59.046325    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:59.046325    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:59.108165    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:59.108165    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:59.139448    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:59.139448    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:59.221394    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:59.208830   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.211726   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.213247   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.214626   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.215488   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:59.221394    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:59.221394    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:01.769201    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.791200    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:01.821949    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.821949    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:01.825904    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:01.853210    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.853210    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:01.856535    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:01.884013    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.884013    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:01.887952    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:01.914871    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.914871    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:01.918934    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:01.949236    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.949236    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:01.953139    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:01.981582    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.981582    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:01.985532    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:02.017739    7212 logs.go:282] 0 containers: []
	W1205 06:47:02.017739    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:02.017739    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:02.017739    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:02.080714    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:02.080714    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:02.115578    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:02.116565    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:02.197070    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:02.186132   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.187073   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.189368   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.190575   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.191559   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:02.197070    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:02.197070    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:02.240876    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:02.240876    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:04.794067    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.821244    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:04.850757    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.850757    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:04.854254    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:04.885802    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.885802    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:04.890179    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:04.921162    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.921162    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:04.927483    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:04.955593    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.955593    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:04.959593    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:04.987937    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.987937    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:04.991470    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:05.021061    7212 logs.go:282] 0 containers: []
	W1205 06:47:05.021061    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:05.025471    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:05.055084    7212 logs.go:282] 0 containers: []
	W1205 06:47:05.055084    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:05.055084    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:05.055084    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:05.096463    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:05.096463    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:05.145562    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:05.145562    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:05.205614    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:05.205614    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:05.236105    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:05.236105    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:05.311644    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:05.300969   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.301790   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.302967   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.304711   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.306061   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
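	The four log-gathering commands repeated in every cycle are the quickest way to see why the control plane never starts; copied from the Run: lines above, to be executed inside the node:
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a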
	I1205 06:47:07.817415    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.841775    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:07.870798    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.870874    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:07.874275    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:07.904822    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.904822    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:07.909419    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:07.942476    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.942476    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:07.946622    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:07.982402    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.982402    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:07.986368    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:08.018024    7212 logs.go:282] 0 containers: []
	W1205 06:47:08.018055    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:08.021599    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:08.053477    7212 logs.go:282] 0 containers: []
	W1205 06:47:08.053477    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:08.057913    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:08.086906    7212 logs.go:282] 0 containers: []
	W1205 06:47:08.086906    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:08.086906    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:08.086906    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:08.134105    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:08.134105    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:08.199234    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:08.199234    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:08.229538    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:08.229538    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:08.312358    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:08.302222   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.303399   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.304403   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.305532   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.306520   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:08.312358    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:08.312358    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
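	To confirm which server the bundled kubectl is dialing, its kubeconfig can be queried directly. A sketch reusing the binary and kubeconfig paths from the log; the config view invocation is standard kubectl, not something this run executed:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  config view --minify -o jsonpath='{.clusters[0].cluster.server}'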
	I1205 06:47:10.858986    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.882487    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:10.911485    7212 logs.go:282] 0 containers: []
	W1205 06:47:10.911485    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:10.915831    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:10.942529    7212 logs.go:282] 0 containers: []
	W1205 06:47:10.942529    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:10.946167    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:10.976549    7212 logs.go:282] 0 containers: []
	W1205 06:47:10.976549    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:10.980000    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:11.007377    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.007377    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:11.011696    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:11.040104    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.040154    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:11.043924    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:11.075338    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.075338    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:11.079214    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:11.108253    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.108253    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:11.108283    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:11.108307    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:11.175507    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:11.175507    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:11.205125    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:11.205125    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:11.284350    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:11.274574   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.275635   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.276587   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.277908   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.279094   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:11.284350    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:11.284350    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:11.326425    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:11.326425    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:13.882929    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.908644    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:13.938949    7212 logs.go:282] 0 containers: []
	W1205 06:47:13.938949    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:13.942723    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:13.972036    7212 logs.go:282] 0 containers: []
	W1205 06:47:13.972036    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:13.975608    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:14.006942    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.006942    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:14.010883    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:14.039783    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.039783    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:14.043702    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:14.074699    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.074699    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:14.081714    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:14.115797    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.115797    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:14.120240    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:14.148949    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.148949    7212 logs.go:284] No container was found matching "kindnet"
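	The seven docker ps probes above are one pass of the same existence check, one per expected control-plane container name. Unrolled into a loop, purely as a sketch of what the log is doing:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      # k8s_<name> is the prefix the Docker-based kubelet gives Kubernetes containers
	      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	    done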
	I1205 06:47:14.149031    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:14.149031    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:14.177232    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:14.177256    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:14.253729    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:14.243636   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.244393   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.247381   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.249374   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.250396   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:14.253729    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:14.253729    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:14.296929    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:14.296929    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:14.345234    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:14.345234    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:16.913879    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.936232    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:16.966712    7212 logs.go:282] 0 containers: []
	W1205 06:47:16.966712    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:16.970413    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:17.000882    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.000882    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:17.004782    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:17.033768    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.033835    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:17.037295    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:17.064692    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.064692    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:17.068384    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:17.094942    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.094942    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:17.099041    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:17.128853    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.128853    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:17.132347    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:17.162220    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.162220    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:17.162302    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:17.162302    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:17.218623    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:17.218623    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:17.279679    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:17.279679    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:17.310820    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:17.310820    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:17.392378    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:17.383714   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.384601   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.387089   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.388284   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.389419   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:17.392378    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:17.392378    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:19.937296    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.960229    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:19.991535    7212 logs.go:282] 0 containers: []
	W1205 06:47:19.991535    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:19.994703    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:20.027498    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.027498    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:20.031400    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:20.061103    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.061103    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:20.064617    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:20.094571    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.094571    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:20.098564    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:20.126979    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.126979    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:20.130800    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:20.163761    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.163761    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:20.167687    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:20.199132    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.199132    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:20.199132    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:20.199132    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:20.283995    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:20.273544   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.275313   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.276695   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.277723   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.278623   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:20.283995    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:20.283995    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:20.327148    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:20.327148    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:20.376774    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:20.376833    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:20.440840    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:20.440840    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:22.976319    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.998933    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:23.029032    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.029032    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:23.032581    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:23.063885    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.063913    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:23.067412    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:23.097477    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.097477    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:23.102023    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:23.131128    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.131128    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:23.135559    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:23.163786    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.163786    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:23.166836    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:23.196149    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.196149    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:23.200130    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:23.226149    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.226149    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:23.226149    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:23.226149    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:23.270734    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:23.270734    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:23.321432    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:23.321432    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:23.384463    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:23.384463    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:23.414734    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:23.414734    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:23.498131    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:23.486398   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.487278   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.492370   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.493315   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.495473   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:26.003605    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.026424    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:26.057455    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.057455    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:26.061184    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:26.089693    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.089693    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:26.093561    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:26.120896    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.120896    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:26.125918    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:26.156135    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.156171    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:26.160046    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:26.190573    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.190652    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:26.194129    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:26.222980    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.222980    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:26.226578    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:26.255995    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.255995    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:26.255995    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:26.255995    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:26.316891    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:26.316891    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:26.344781    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:26.345781    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:26.424418    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:26.415112   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.416239   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.417414   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.418720   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.419921   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:26.424418    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:26.424418    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:26.466578    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:26.466578    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:29.021029    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.042745    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:29.072233    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.072233    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:29.076192    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:29.106021    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.106021    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:29.110492    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:29.142373    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.142436    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:29.145869    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:29.177863    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.177863    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:29.182256    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:29.213617    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.213617    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:29.217234    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:29.248409    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.248409    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:29.251948    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:29.279697    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.279697    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:29.279697    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:29.279697    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:29.306595    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:29.306595    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:29.387588    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:29.376998   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.377931   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.380231   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.381708   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.383241   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:29.387588    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:29.387588    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:29.432358    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:29.432358    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:29.491687    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:29.491687    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:32.058315    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:32.080902    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:32.112180    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.112180    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:32.115940    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:32.149909    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.149909    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:32.153337    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:32.182212    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.182212    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:32.185857    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:32.214479    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.214479    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:32.218198    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:32.244828    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.244828    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:32.248159    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:32.276613    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.276613    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:32.282850    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:32.312038    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.312038    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:32.312038    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:32.312038    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:32.395073    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:32.382638   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.383368   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.387782   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.388958   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.389569   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:32.395073    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:32.395073    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:32.438081    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:32.438081    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:32.483065    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:32.483065    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:32.543549    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:32.543549    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:35.082420    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:35.109047    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:35.138903    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.138903    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:35.142559    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:35.169925    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.169925    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:35.176120    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:35.207119    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.207119    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:35.210472    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:35.237822    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.237822    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:35.241605    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:35.269404    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.269404    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:35.272713    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:35.302852    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.302852    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:35.306750    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:35.335749    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.335749    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:35.335749    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:35.335749    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:35.362313    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:35.362313    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:35.447710    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:35.436173   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.437471   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.438375   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.440501   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.441298   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:35.447756    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:35.447784    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:35.488801    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:35.488801    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:35.538430    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:35.538430    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:38.105092    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:38.127701    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:38.158329    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.158329    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:38.162322    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:38.190981    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.190981    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:38.194648    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:38.224869    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.224869    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:38.228377    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:38.259328    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.259328    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:38.262581    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:38.290225    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.290225    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:38.293900    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:38.323002    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.323002    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:38.325942    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:38.356122    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.356122    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:38.356158    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:38.356190    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:38.421485    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:38.421485    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:38.451418    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:38.451418    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:38.534923    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:38.524924   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.525955   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.526945   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.528136   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.529104   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:38.534923    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:38.534923    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:38.579182    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:38.579182    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:41.132133    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:41.155916    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:41.190632    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.190671    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:41.194307    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:41.224743    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.224743    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:41.228450    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:41.255924    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.255924    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:41.259608    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:41.287623    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.287623    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:41.291302    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:41.320832    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.320832    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:41.324515    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:41.352503    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.352503    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:41.357486    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:41.384618    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.384618    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:41.384618    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:41.384618    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:41.450555    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:41.450555    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:41.481950    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:41.481950    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:41.556790    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:41.546857   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.547777   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.550205   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.551277   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.552372   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:41.556790    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:41.556790    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:41.597562    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:41.597562    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
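	The "container status" probe above uses a shell fallback: run crictl if "which crictl" finds it, otherwise fall back to "docker ps -a". Reproduced as a sketch (illustrative only; in the test this command runs inside the minikube node over SSH, not on the Windows host):

    // container_status.go - same crictl-or-docker fallback as the log line above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("container status probe failed:", err)
        }
    }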
	I1205 06:47:44.157547    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:44.182064    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:44.211702    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.211702    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:44.216365    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:44.244631    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.244631    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:44.248073    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:44.276763    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.276763    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:44.280181    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:44.306409    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.306409    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:44.312584    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:44.340481    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.340481    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:44.344742    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:44.376686    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.376686    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:44.380570    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:44.409366    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.409410    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:44.409410    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:44.409410    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:44.472548    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:44.472548    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:44.503264    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:44.503264    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:44.582552    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:44.572346   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.574184   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.575200   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.578087   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.579345   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:44.582552    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:44.582552    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:44.624563    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:44.624563    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
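	Note the cadence: the timestamps show the whole check-and-gather pass repeating roughly every three seconds (06:47:41, 06:47:44, 06:47:47, ...), the signature of a wait loop that retries until the apiserver becomes healthy or an outer deadline expires. A generic sketch of such a loop (the 3s interval is read off the timestamps; the 10-minute deadline is a placeholder, not minikube's actual timeout):

    // poll.go - generic retry loop matching the ~3s cadence in the log.
    package main

    import (
        "fmt"
        "time"
    )

    func waitFor(check func() bool, interval, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if check() {
                return true
            }
            time.Sleep(interval)
        }
        return false
    }

    func main() {
        healthy := waitFor(
            func() bool { return false }, // e.g. "is the apiserver container up?"
            3*time.Second, 10*time.Minute)
        fmt.Println("apiserver healthy:", healthy)
    }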
	I1205 06:47:47.178449    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:47.200708    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:47.234713    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.234713    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:47.238519    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:47.267129    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.267129    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:47.270852    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:47.300990    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.300990    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:47.304715    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:47.333260    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.333327    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:47.336691    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:47.366566    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.366566    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:47.370142    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:47.398076    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.398076    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:47.401547    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:47.430057    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.430057    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:47.430057    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:47.430109    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:47.474316    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:47.474316    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:47.528972    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:47.529068    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:47.598649    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:47.598649    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:47.629147    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:47.629147    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:47.719619    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:47.707742   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.709680   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.711980   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.714822   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.715445   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
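	Each pass scans for one control-plane component at a time with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}"; an empty ID list produces the "0 containers" / "No container was found matching" pair seen above for all seven components. An equivalent loop (sketch; component names copied from the log, requires a local docker daemon):

    // scan_containers.go - per-component container scan, as in each pass above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
        }
    }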
	I1205 06:47:50.224894    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:50.249386    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:50.280435    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.280435    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:50.283799    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:50.310585    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.310585    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:50.313994    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:50.345240    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.345240    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:50.349156    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:50.377340    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.377340    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:50.381086    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:50.408519    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.408519    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:50.411662    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:50.443298    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.443298    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:50.446970    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:50.475494    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.475494    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:50.475494    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:50.475494    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:50.538866    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:50.538866    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:50.568193    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:50.568193    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:50.646844    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:50.637514   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.638515   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.639306   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.641633   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.642407   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:50.646844    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:50.646844    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:50.692026    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:50.692026    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
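	Each pass opens with "sudo pgrep -xnf kube-apiserver.*minikube.*": -f matches against the full command line, -x requires the pattern to match it exactly, and -n keeps only the newest matching PID. pgrep exits with status 1 when nothing matches, which is how the collector decides the apiserver process is absent. Wrapped in a sketch (illustrative, meant to run inside the node):

    // apiserver_pid.go - the pgrep probe that opens each pass above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Output()
        if err != nil {
            // pgrep exits 1 when no process matches.
            fmt.Println("kube-apiserver not running:", err)
            return
        }
        fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }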
	I1205 06:47:53.247044    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:53.269060    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:53.300023    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.300059    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:53.303477    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:53.332467    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.332546    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:53.337763    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:53.367949    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.367993    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:53.371897    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:53.400010    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.400010    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:53.403505    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:53.434809    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.434809    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:53.438803    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:53.466413    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.466413    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:53.470011    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:53.498721    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.498721    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:53.498721    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:53.498721    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:53.528848    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:53.528848    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:53.607294    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:53.597060   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.599213   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.600195   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.602429   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.603325   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:53.607294    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:53.607294    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:53.648012    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:53.648012    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:53.700266    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:53.700790    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
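	Besides the failing "describe nodes", every pass collects the same fixed set of log sources: the kubelet and docker/cri-docker units via journalctl, kernel warnings and above via dmesg (-H human-readable, -P no pager, -L=never no color, --level filtered to warn and worse), and the container status listing. A table-driven sketch of that fan-out (commands copied verbatim from the log; they only produce meaningful output inside the node, where these units exist):

    // gather_logs.go - fan-out over the log sources gathered in each pass.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
        }
        for name, cmdline := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s", name, err, out)
        }
    }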
	I1205 06:47:56.267783    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:56.289803    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:56.318251    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.318251    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:56.322075    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:56.349027    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.349027    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:56.352735    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:56.379632    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.379632    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:56.384305    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:56.411837    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.411837    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:56.415300    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:56.443062    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.443062    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:56.446823    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:56.475726    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.475726    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:56.479378    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:56.517912    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.517912    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:56.517912    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:56.517912    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:56.596916    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:56.585115   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.586183   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.587141   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.589286   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.592015   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:56.596916    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:56.596962    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:56.637032    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:56.637032    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:56.684819    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:56.684819    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:56.747303    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:56.747303    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:59.281776    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:59.305247    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:59.335407    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.335407    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:59.338881    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:59.366851    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.366851    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:59.370328    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:59.399291    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.399291    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:59.402960    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:59.432515    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.432515    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:59.436801    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:59.467104    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.467104    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:59.470243    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:59.497877    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.497941    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:59.501112    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:59.529615    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.529697    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:59.529697    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:59.529697    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:59.609983    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:59.598253   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.598877   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.601741   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.603978   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.605591   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:59.610022    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:59.610022    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:59.649863    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:59.649863    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:59.700479    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:59.700479    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:59.763989    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:59.763989    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:02.300047    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:02.322894    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:02.353230    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.353309    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:02.356900    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:02.385700    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.385700    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:02.388841    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:02.416101    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.416101    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:02.419750    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:02.447464    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.447464    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:02.450777    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:02.480237    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.480237    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:02.483526    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:02.511591    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.511591    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:02.515255    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:02.545284    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.545284    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:02.545284    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:02.545284    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:02.610980    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:02.610980    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:02.642418    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:02.642418    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:02.726956    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:02.717020   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.718016   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.719379   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.720343   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.721493   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:02.726956    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:02.726956    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:02.771023    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:02.771023    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:05.327683    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:05.351195    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:05.381112    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.381112    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:05.384972    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:05.413259    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.413329    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:05.416730    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:05.445686    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.445686    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:05.449213    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:05.484954    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.484954    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:05.488455    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:05.519190    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.519228    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:05.522884    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:05.554807    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.554807    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:05.558365    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:05.587379    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.587399    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:05.587399    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:05.587425    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:05.641465    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:05.641465    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:05.706506    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:05.706506    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:05.736869    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:05.736941    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:05.824292    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:05.814019   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.816401   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.817445   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.818646   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.819823   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:05.824292    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:05.824292    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:08.371845    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:08.396050    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:08.433853    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.433853    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:08.437453    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:08.468504    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.468504    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:08.471946    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:08.507492    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.507492    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:08.511033    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:08.541947    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.541947    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:08.545843    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:08.575954    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.575954    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:08.579413    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:08.606879    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.606879    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:08.610759    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:08.640063    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.640063    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:08.640115    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:08.640115    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:08.703340    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:08.703340    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:08.733278    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:08.733278    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:08.818249    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:08.805431   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.806342   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.811394   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.812436   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.813338   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:08.818249    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:08.818249    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:08.862665    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:08.862665    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:11.417652    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:11.448987    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:11.478110    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.478110    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:11.483009    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:11.508939    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.508939    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:11.515716    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:11.546004    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.546004    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:11.550908    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:11.580644    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.580644    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:11.586014    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:11.614154    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.614154    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:11.618353    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:11.651170    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.651170    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:11.656537    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:11.686019    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.686019    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:11.686019    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:11.687024    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:11.732747    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:11.732747    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:11.793464    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:11.793464    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:11.823414    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:11.823414    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:11.898268    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:11.889270   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.890352   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.891383   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.892797   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.893668   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:11.889270   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.890352   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.891383   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.892797   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.893668   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
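	
	The describe-nodes failure is consistent with the empty container probes: with no kube-apiserver container running, nothing listens on port 8441, so kubectl is refused at TCP connect before any API handling happens. A quick hand check that reproduces the same failure (endpoint taken from the error text above):
	
	    # Expect "Connection refused" while the apiserver is down.
	    curl -k "https://localhost:8441/api?timeout=32s"
	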
	I1205 06:48:11.898268    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:11.898268    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:14.445893    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:14.474707    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:14.507067    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.507090    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:14.510610    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:14.541536    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.541536    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:14.544693    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:14.573562    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.573562    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:14.577631    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:14.611830    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.611830    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:14.615419    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:14.646076    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.646076    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:14.649650    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:14.677233    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.677233    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:14.681207    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:14.716473    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.716473    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:14.716473    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:14.716473    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:14.780720    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:14.780720    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:14.810274    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:14.810274    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:14.892394    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:14.882017   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.882944   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.885374   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.887829   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.889201   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:14.882017   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.882944   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.885374   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.887829   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.889201   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:14.892440    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:14.892463    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:14.935499    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:14.935499    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:17.497000    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:17.515654    7212 kubeadm.go:602] duration metric: took 4m3.8265772s to restartPrimaryControlPlane
	W1205 06:48:17.515654    7212 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:48:17.520476    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
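	
	After roughly four minutes of trying to revive the existing control plane, minikube gives up ("will reset cluster") and tears down kubeadm state before re-initializing. The reset shown above is the standard kubeadm teardown, pointed at the cri-dockerd socket and at minikube's pinned binaries:
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	        kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	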
	I1205 06:48:18.188924    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:48:18.211141    7212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:48:18.226163    7212 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:48:18.231371    7212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:48:18.247460    7212 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:48:18.247460    7212 kubeadm.go:158] found existing configuration files:
	
	I1205 06:48:18.251775    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:48:18.267019    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:48:18.270577    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:48:18.291093    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:48:18.304172    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:48:18.307161    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:48:18.323174    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:48:18.334168    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:48:18.338162    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:48:18.354164    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:48:18.366170    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:48:18.369169    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
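	
	The four grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at the expected endpoint, and removed otherwise so that kubeadm init can regenerate it. Here every grep exits with status 2 because the earlier reset already deleted the files, so the rm calls are no-ops. The same sweep, condensed into a loop (a sketch using this run's endpoint):
	
	    for f in admin kubelet controller-manager scheduler; do
	        sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
	            || sudo rm -f "/etc/kubernetes/$f.conf"
	    done
	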
	I1205 06:48:18.385163    7212 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
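	
	This init call streams its output into the log as the kubeadm.go:319 lines that follow. The long --ignore-preflight-errors list downgrades the named preflight checks (Swap, NumCPU, Mem, SystemVerification, the DirAvailable/FileAvailable checks, and so on) from fatal errors to warnings, which is why the three findings below surface as [WARNING ...] instead of aborting the run. A trimmed form of the same invocation:
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification
	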
	I1205 06:48:18.520419    7212 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 06:48:18.600326    7212 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:48:18.711687    7212 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:52:19.557610    7212 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:52:19.557683    7212 kubeadm.go:319] 
	I1205 06:52:19.557826    7212 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:52:19.561892    7212 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:52:19.562423    7212 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:52:19.562542    7212 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:52:19.562542    7212 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 06:52:19.562542    7212 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 06:52:19.562542    7212 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 06:52:19.563630    7212 kubeadm.go:319] CONFIG_INET: enabled
	I1205 06:52:19.563742    7212 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 06:52:19.563815    7212 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 06:52:19.564032    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 06:52:19.564214    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 06:52:19.564316    7212 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 06:52:19.565465    7212 kubeadm.go:319] OS: Linux
	I1205 06:52:19.565539    7212 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:52:19.565664    7212 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:52:19.565817    7212 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:52:19.565879    7212 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:52:19.566004    7212 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:52:19.566103    7212 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:52:19.566193    7212 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:52:19.566291    7212 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:52:19.566380    7212 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:52:19.566467    7212 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:52:19.566467    7212 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:52:19.566467    7212 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:52:19.566467    7212 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:52:19.570411    7212 out.go:252]   - Generating certificates and keys ...
	I1205 06:52:19.570411    7212 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:52:19.571550    7212 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:52:19.572575    7212 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:52:19.572575    7212 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:52:19.575966    7212 out.go:252]   - Booting up control plane ...
	I1205 06:52:19.575966    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:52:19.576966    7212 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001375391s
	I1205 06:52:19.576966    7212 kubeadm.go:319] 
	I1205 06:52:19.576966    7212 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:52:19.576966    7212 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:52:19.576966    7212 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:52:19.576966    7212 kubeadm.go:319] 
	I1205 06:52:19.577967    7212 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:52:19.577967    7212 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:52:19.577967    7212 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:52:19.577967    7212 kubeadm.go:319] 
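	
	The failure mode of this first attempt: kubeadm waited four minutes for the kubelet's local health endpoint to answer and never saw a healthy reply, so the control-plane static pods were never confirmed. On the node, the first-line checks are the two commands kubeadm itself suggests, plus probing the endpoint directly:
	
	    systemctl status kubelet                     # is the unit running at all?
	    journalctl -xeu kubelet                      # why did it exit or stay unhealthy?
	    curl -sSL http://127.0.0.1:10248/healthz     # prints "ok" once the kubelet is up
	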
	W1205 06:52:19.577967    7212 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001375391s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
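	
	One warning in the stderr block is likely load-bearing for this environment: the node runs cgroups v1 (WSL2 kernel 5.15), and kubelet v1.35 deprecates v1 support behind an explicit opt-out, which lines up with the "required cgroups disabled" hint in the failure text. Per the warning, re-enabling v1 means setting FailCgroupV1 to false in the kubelet configuration; a minimal sketch of that fragment (field name taken from the warning text, YAML casing and file placement assumed):
	
	    # /var/lib/kubelet/config.yaml (fragment, assumed location)
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false
	
	Whether this alone would let the kubelet pass its health check is not established by this log.
	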
	
	I1205 06:52:19.583339    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 06:52:20.041041    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:52:20.059958    7212 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:52:20.064870    7212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:52:20.077700    7212 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:52:20.077700    7212 kubeadm.go:158] found existing configuration files:
	
	I1205 06:52:20.082397    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:52:20.097746    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:52:20.102900    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:52:20.121456    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:52:20.135442    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:52:20.139595    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:52:20.159529    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:52:20.172924    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:52:20.176919    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:52:20.195400    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:52:20.209944    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:52:20.214293    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:52:20.235566    7212 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:52:20.355259    7212 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 06:52:20.442209    7212 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:52:20.540382    7212 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:56:21.333777    7212 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 06:56:21.334317    7212 kubeadm.go:319] 
	I1205 06:56:21.334526    7212 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
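	
	Note the shift between the two attempts: the first wait ended with "context deadline exceeded" (the final healthz probe timed out inside the 4m0s budget), while this one ends with "connection refused" (nothing is bound to port 10248 at all, i.e. the kubelet is not even listening). Confirming the listener is a one-liner on the node:
	
	    # No output means no process is listening on the kubelet healthz port.
	    sudo ss -ltnp | grep 10248
	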
	I1205 06:56:21.342892    7212 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:56:21.342892    7212 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:56:21.342892    7212 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:56:21.342892    7212 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 06:56:21.342892    7212 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 06:56:21.342892    7212 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 06:56:21.342892    7212 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_INET: enabled
	I1205 06:56:21.344426    7212 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 06:56:21.345102    7212 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 06:56:21.345788    7212 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 06:56:21.345788    7212 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 06:56:21.345946    7212 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 06:56:21.346019    7212 kubeadm.go:319] OS: Linux
	I1205 06:56:21.346100    7212 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:56:21.346100    7212 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:56:21.346199    7212 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:56:21.346284    7212 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:56:21.346368    7212 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:56:21.346451    7212 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:56:21.346535    7212 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:56:21.346682    7212 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:56:21.346682    7212 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:56:21.346843    7212 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:56:21.347086    7212 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:56:21.347253    7212 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:56:21.347408    7212 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:56:21.350418    7212 out.go:252]   - Generating certificates and keys ...
	I1205 06:56:21.350418    7212 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:56:21.350418    7212 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:56:21.350952    7212 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:56:21.351645    7212 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:56:21.351645    7212 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:56:21.351645    7212 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:56:21.353617    7212 out.go:252]   - Booting up control plane ...
	I1205 06:56:21.353617    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:56:21.355622    7212 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:56:21.355622    7212 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:56:21.355622    7212 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000747056s
	I1205 06:56:21.355622    7212 kubeadm.go:319] 
	I1205 06:56:21.355622    7212 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:56:21.355622    7212 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:56:21.355622    7212 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:56:21.355622    7212 kubeadm.go:319] 
	I1205 06:56:21.355622    7212 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:56:21.355622    7212 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:56:21.356621    7212 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:56:21.356621    7212 kubeadm.go:319] 
	I1205 06:56:21.356621    7212 kubeadm.go:403] duration metric: took 12m7.7172113s to StartCluster
	I1205 06:56:21.356621    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:56:21.360622    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:56:21.601792    7212 cri.go:89] found id: ""
	I1205 06:56:21.601830    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.601858    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:56:21.601858    7212 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:56:21.606583    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:56:21.653730    7212 cri.go:89] found id: ""
	I1205 06:56:21.653730    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.653730    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:56:21.653730    7212 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:56:21.658389    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:56:21.703398    7212 cri.go:89] found id: ""
	I1205 06:56:21.703398    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.703398    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:56:21.703398    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:56:21.707890    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:56:21.747639    7212 cri.go:89] found id: ""
	I1205 06:56:21.747639    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.747639    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:56:21.747639    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:56:21.752626    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:56:21.800627    7212 cri.go:89] found id: ""
	I1205 06:56:21.800627    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.800627    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:56:21.800627    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:56:21.805173    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:56:21.844454    7212 cri.go:89] found id: ""
	I1205 06:56:21.844454    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.844454    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:56:21.844454    7212 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:56:21.848782    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:56:21.891771    7212 cri.go:89] found id: ""
	I1205 06:56:21.891771    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.891771    7212 logs.go:284] No container was found matching "kindnet"
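	
	After the second init failure the container probes switch from docker ps name filters to the CRI-level equivalent, querying through the cri-dockerd socket. The hand-run form (component name is a placeholder for any of the names probed above):
	
	    # --quiet prints bare container IDs; --name matches against the container name.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	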
	I1205 06:56:21.891771    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:56:21.891771    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:56:21.969778    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:56:21.969778    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:56:22.005948    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:56:22.005948    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:56:22.265248    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:56:22.255844   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.256835   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259037   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259983   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.260673   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:56:22.255844   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.256835   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259037   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259983   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.260673   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:56:22.265248    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:56:22.265248    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:56:22.308852    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:56:22.308852    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 06:56:22.367035    7212 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000747056s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:56:22.367035    7212 out.go:285] * 
	W1205 06:56:22.367247    7212 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000747056s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:56:22.367617    7212 out.go:285] * 
	W1205 06:56:22.369745    7212 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:56:22.374297    7212 out.go:203] 
	W1205 06:56:22.378243    7212 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1205 06:56:22.378410    7212 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:56:22.378410    7212 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:56:22.381512    7212 out.go:203] 
	
	
	==> Docker <==
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406062974Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406068774Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406091077Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406121880Z" level=info msg="Initializing buildkit"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.521727722Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529404028Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529609450Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529612750Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529693058Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:44:10 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:10 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:44:11 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:44:11 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
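One detail worth noting in the Docker section above: cri-dockerd registers with cgroupDriver cgroupfs, while the Last Start log earlier suggests retrying with kubelet.cgroup-driver=systemd; the kubelet and the container runtime must agree on the cgroup driver or pods fail to start. A quick way to see which driver the node's Docker daemon reports, as a sketch (assuming the node container is functional-247800, as elsewhere in this report):

	docker exec functional-247800 docker info --format "{{.CgroupDriver}}"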
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:56:24.384022   41336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:24.385030   41336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:24.387717   41336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:24.390183   41336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:24.391308   41336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
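The describe-nodes failure above is the in-node view of the outage: kubectl targets localhost:8441 from inside the container, the port the apiserver static pod would normally listen on (this profile was started with --apiserver-port=8441), and nothing is accepting connections. A direct probe of that endpoint, as a sketch (kubeadm's own health checks invoke curl inside the node, so curl is assumed to be available there):

	docker exec functional-247800 curl -ksS https://localhost:8441/healthz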
	
	
	==> dmesg <==
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:44] CPU: 6 PID: 67767 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000825] RIP: 0033:0x7f9683d26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7f9683d26af6.
	[  +0.000653] RSP: 002b:00007ffedb1b9ba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000774] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000786] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000895] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000804] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000818] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000794] FS:  0000000000000000 GS:  0000000000000000
	[  +0.946792] CPU: 8 PID: 67891 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f0ceb5efb20
	[  +0.000393] Code: Unable to access opcode bytes at RIP 0x7f0ceb5efaf6.
	[  +0.000679] RSP: 002b:00007fff219f5bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000791] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000868] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001135] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001172] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001044] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:56:24 up  2:30,  0 user,  load average: 0.26, 0.30, 0.43
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:56:21 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:56:22 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 06:56:22 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:22 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:22 functional-247800 kubelet[41173]: E1205 06:56:22.164806   41173 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:56:22 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:56:22 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:56:22 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 06:56:22 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:22 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:22 functional-247800 kubelet[41206]: E1205 06:56:22.913578   41206 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:56:22 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:56:22 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:56:23 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 05 06:56:23 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:23 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:23 functional-247800 kubelet[41232]: E1205 06:56:23.689030   41232 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:56:23 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:56:23 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:56:24 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 05 06:56:24 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:24 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:56:24 functional-247800 kubelet[41341]: E1205 06:56:24.423409   41341 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:56:24 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:56:24 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
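Every restart in the loop above dies in the same place: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host unless that support is explicitly re-enabled, so systemd keeps cycling the unit (restart counters 321 through 324 in this excerpt alone). A minimal check of which cgroup version the node container actually sees (stat -fc %T prints tmpfs on cgroup v1 and cgroup2fs on cgroup v2):

	docker exec functional-247800 stat -fc %T /sys/fs/cgroup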
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (614.3224ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (743.95s)
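The ExtraConfig failure reduces to one root cause: on this cgroup v1 WSL2 host, kubelet v1.35.0-beta.0 exits during configuration validation, the control plane never comes up, and every subsequent kubectl call is refused. Two remediations follow directly from the messages above, sketched here with the same CLI used in this run; neither retry was part of the run itself:

	# Retry with the cgroup driver minikube itself suggests in the Last Start log
	out/minikube-windows-amd64.exe start -p functional-247800 --extra-config=kubelet.cgroup-driver=systemd

	# Or opt back into cgroup v1 via the option named in the SystemVerification
	# warning; as a KubeletConfiguration fragment that would be (field spelling
	# assumed from the warning text):
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false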

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (53.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-247800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-247800 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (50.3497911s)

                                                
                                                
** stderr ** 
	E1205 06:56:36.306278    4532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:56:46.391552    4532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:56:56.431457    4532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:57:06.470054    4532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:57:16.510409    4532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
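These EOFs are the host-side symptom of the same dead apiserver: 127.0.0.1:55398 is the ephemeral host port Docker publishes for the container's 8441/tcp, as the docker inspect output below confirms under NetworkSettings.Ports. Reading just that mapping, as a sketch:

	docker port functional-247800 8441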
functional_test.go:827: failed to get components. args "kubectl --context functional-247800 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
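When only one field of an inspect dump this size is needed, a Go template keeps it to a single line; for example, pulling out the published apiserver port (a sketch; quoting is shown bash-style and may need adjusting for PowerShell or cmd):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-247800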
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (599.5083ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.2530559s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-088800 image ls --format yaml --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ ssh     │ functional-088800 ssh pgrep buildkitd                                                                                   │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │                     │
	│ image   │ functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr                  │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls                                                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format json --alsologtostderr                                                              │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ image   │ functional-088800 image ls --format table --alsologtostderr                                                             │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:21 UTC │ 05 Dec 25 06:21 UTC │
	│ delete  │ -p functional-088800                                                                                                    │ functional-088800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:25 UTC │ 05 Dec 25 06:26 UTC │
	│ start   │ -p functional-247800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:26 UTC │                     │
	│ start   │ -p functional-247800 --alsologtostderr -v=8                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:34 UTC │                     │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.1                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:41 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:3.3                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:41 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add registry.k8s.io/pause:latest                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache add minikube-local-cache-test:functional-247800                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ functional-247800 cache delete minikube-local-cache-test:functional-247800                                              │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl images                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ cache   │ functional-247800 cache reload                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ kubectl │ functional-247800 kubectl -- --context functional-247800 get pods                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ start   │ -p functional-247800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:44:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:44:02.272034    7212 out.go:360] Setting OutFile to fd 1444 ...
	I1205 06:44:02.317383    7212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:02.317383    7212 out.go:374] Setting ErrFile to fd 2004...
	I1205 06:44:02.317383    7212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:02.332249    7212 out.go:368] Setting JSON to false
	I1205 06:44:02.336248    7212 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8300,"bootTime":1764908742,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:44:02.336248    7212 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:44:02.343248    7212 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:44:02.346834    7212 notify.go:221] Checking for updates...
	I1205 06:44:02.346834    7212 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:44:02.349109    7212 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:44:02.350847    7212 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:44:02.353405    7212 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:44:02.355242    7212 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:44:02.357599    7212 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:02.357599    7212 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:44:02.542801    7212 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:44:02.547077    7212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:02.784844    7212 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-05 06:44:02.759817606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:44:02.788514    7212 out.go:179] * Using the docker driver based on existing profile
	I1205 06:44:02.790794    7212 start.go:309] selected driver: docker
	I1205 06:44:02.790794    7212 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:02.790794    7212 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:44:02.797110    7212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:03.043306    7212 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-05 06:44:03.019620575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:44:03.123839    7212 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:44:03.123839    7212 cni.go:84] Creating CNI manager for ""
	I1205 06:44:03.123839    7212 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:44:03.123839    7212 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:03.128293    7212 out.go:179] * Starting "functional-247800" primary control-plane node in "functional-247800" cluster
	I1205 06:44:03.130664    7212 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:44:03.134094    7212 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:44:03.137567    7212 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:44:03.137567    7212 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:44:03.180283    7212 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:44:03.219602    7212 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:44:03.219602    7212 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:44:03.490854    7212 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
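Both preload locations above returned 404 for v1.35.0-beta.0, so minikube falls back to caching each image tarball individually (the localpath.go "windows sanitize" lines that follow). A minimal sketch, assuming curl is available on the host, for probing a preload URL from this log before a run:

    # "404" in the status line means minikube will cache images one by one instead.
    curl -sI "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4" | head -n 1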
	I1205 06:44:03.491134    7212 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\config.json ...
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:44:03.491313    7212 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:44:03.493285    7212 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:44:03.493386    7212 start.go:360] acquireMachinesLock for functional-247800: {Name:mk72f4cc17efe788c0da7f51dc6962af3f611c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:03.493386    7212 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-247800"
	I1205 06:44:03.493386    7212 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:44:03.493386    7212 fix.go:54] fixHost starting: 
	I1205 06:44:03.504606    7212 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
	I1205 06:44:03.588000    7212 fix.go:112] recreateIfNeeded on functional-247800: state=Running err=<nil>
	W1205 06:44:03.588000    7212 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:44:03.607696    7212 out.go:252] * Updating the running docker "functional-247800" container ...
	I1205 06:44:03.607696    7212 machine.go:94] provisionDockerMachine start ...
	I1205 06:44:03.620462    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:03.791695    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:03.792694    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:03.792694    7212 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:44:04.191189    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:44:04.191189    7212 ubuntu.go:182] provisioning hostname "functional-247800"
	I1205 06:44:04.196954    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:04.962117    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:04.963119    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:04.963119    7212 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-247800 && echo "functional-247800" | sudo tee /etc/hostname
	I1205 06:44:05.528862    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-247800
	
	I1205 06:44:05.533862    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:05.785961    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:05.785961    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:05.785961    7212 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-247800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-247800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-247800' | sudo tee -a /etc/hosts; 
				fi
			fi
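The script above is idempotent: it rewrites an existing 127.0.1.1 entry in place, or appends one only when no line for the hostname exists yet. A sketch of the expected end state, assuming /etc/hosts had no prior 127.0.1.1 entry:

    $ grep '^127.0.1.1' /etc/hosts
    127.0.1.1 functional-247800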
	I1205 06:44:05.993200    7212 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:05.993991    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 06:44:05.994386    7212 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.5030361s
	I1205 06:44:05.994386    7212 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:44:05.994965    7212 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:05.996380    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:44:05.996380    7212 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.5050299s
	I1205 06:44:05.996380    7212 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:44:06.001965    7212 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.001965    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 06:44:06.001965    7212 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.5106152s
	I1205 06:44:06.001965    7212 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 06:44:06.024972    7212 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.025248    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 06:44:06.025248    7212 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.5338979s
	I1205 06:44:06.025248    7212 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 06:44:06.030397    7212 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.030653    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 06:44:06.030804    7212 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.5394539s
	I1205 06:44:06.030804    7212 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 06:44:06.057622    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:44:06.057686    7212 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 06:44:06.057828    7212 ubuntu.go:190] setting up certificates
	I1205 06:44:06.057876    7212 provision.go:84] configureAuth start
	I1205 06:44:06.063201    7212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:44:06.079402    7212 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.079402    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:44:06.079402    7212 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.5880514s
	I1205 06:44:06.079402    7212 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:44:06.127402    7212 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.127402    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 06:44:06.127402    7212 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.6360504s
	I1205 06:44:06.127402    7212 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 06:44:06.127402    7212 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:44:06.128401    7212 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:44:06.128401    7212 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.6370492s
	I1205 06:44:06.128401    7212 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:44:06.128401    7212 cache.go:87] Successfully saved all images to host disk.
	I1205 06:44:06.133387    7212 provision.go:143] copyHostCerts
	I1205 06:44:06.133387    7212 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 06:44:06.133387    7212 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 06:44:06.134387    7212 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 06:44:06.134387    7212 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 06:44:06.135392    7212 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 06:44:06.135392    7212 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 06:44:06.135392    7212 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 06:44:06.135392    7212 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 06:44:06.136402    7212 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 06:44:06.136402    7212 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-247800 san=[127.0.0.1 192.168.49.2 functional-247800 localhost minikube]
	I1205 06:44:06.163392    7212 provision.go:177] copyRemoteCerts
	I1205 06:44:06.167399    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:44:06.170398    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:06.226397    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:06.360157    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:44:06.390856    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:44:06.422898    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:44:06.452624    7212 provision.go:87] duration metric: took 394.7423ms to configureAuth
	I1205 06:44:06.452624    7212 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:44:06.452624    7212 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:06.457638    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:06.514727    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:06.514768    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:06.514768    7212 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 06:44:06.696044    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 06:44:06.696090    7212 ubuntu.go:71] root file system type: overlay
	I1205 06:44:06.696090    7212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 06:44:06.699335    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:06.754511    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:06.755263    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:06.755357    7212 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 06:44:06.951048    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 06:44:06.954929    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.012752    7212 main.go:143] libmachine: Using SSH client type: native
	I1205 06:44:07.013752    7212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 55394 <nil> <nil>}
	I1205 06:44:07.013752    7212 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 06:44:07.221929    7212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
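The empty output above suggests the diff in the previous command found no changes, so the unit file was left alone and docker was not restarted. That command is an install-if-changed guard: diff -u exits nonzero only when the rendered docker.service.new differs, and only then is it moved into place and the service bounced. The same pattern, sketched generically (mysvc is a placeholder name):

    new=/lib/systemd/system/mysvc.service.new
    cur=/lib/systemd/system/mysvc.service
    sudo diff -u "$cur" "$new" || {   # nonzero exit means the content changed
      sudo mv "$new" "$cur"
      sudo systemctl daemon-reload && sudo systemctl restart mysvc
    }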
	I1205 06:44:07.221949    7212 machine.go:97] duration metric: took 3.6142004s to provisionDockerMachine
	I1205 06:44:07.221974    7212 start.go:293] postStartSetup for "functional-247800" (driver="docker")
	I1205 06:44:07.221974    7212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:44:07.226668    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:44:07.229222    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.288022    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.425061    7212 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:44:07.435656    7212 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:44:07.435656    7212 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:44:07.435656    7212 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 06:44:07.436190    7212 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 06:44:07.437151    7212 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 06:44:07.437615    7212 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts -> hosts in /etc/test/nested/copy/8036
	I1205 06:44:07.442100    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8036
	I1205 06:44:07.458772    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 06:44:07.490927    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts --> /etc/test/nested/copy/8036/hosts (40 bytes)
	I1205 06:44:07.521512    7212 start.go:296] duration metric: took 299.5056ms for postStartSetup
	I1205 06:44:07.526199    7212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:44:07.528904    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.584765    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.708107    7212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:44:07.715598    7212 fix.go:56] duration metric: took 4.2221494s for fixHost
	I1205 06:44:07.716591    7212 start.go:83] releasing machines lock for "functional-247800", held for 4.2221494s
	I1205 06:44:07.719938    7212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-247800
	I1205 06:44:07.774650    7212 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 06:44:07.778633    7212 ssh_runner.go:195] Run: cat /version.json
	I1205 06:44:07.779199    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.781778    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:07.835000    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.846698    7212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
	I1205 06:44:07.959833    7212 ssh_runner.go:195] Run: systemctl --version
	W1205 06:44:07.966184    7212 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
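This failed probe is what triggers the "Failing to connect to https://registry.k8s.io/" warning a few lines below: the check invokes the Windows binary name curl.exe inside the Linux guest, where only curl exists, so it dies with exit 127 (command not found) rather than reporting a real network status. A minimal repro sketch against the container from this log:

    # First line reproduces the 127; the second is presumably what the probe intended.
    docker exec functional-247800 sh -c 'curl.exe -sS -m 2 https://registry.k8s.io/; echo exit=$?'
    docker exec functional-247800 sh -c 'curl -sS -o /dev/null -w "%{http_code}\n" -m 2 https://registry.k8s.io/'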
	I1205 06:44:07.976576    7212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:44:07.985928    7212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:44:07.990302    7212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:44:08.006960    7212 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:44:08.006960    7212 start.go:496] detecting cgroup driver to use...
	I1205 06:44:08.006960    7212 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:44:08.007486    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:44:08.037172    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:44:08.060370    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:44:08.076873    7212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:44:08.081935    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1205 06:44:08.088262    7212 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 06:44:08.088262    7212 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 06:44:08.102235    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:44:08.120429    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:44:08.138453    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:44:08.157604    7212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:44:08.178745    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:44:08.197474    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:44:08.219535    7212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:44:08.241784    7212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:44:08.262205    7212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:44:08.281639    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:08.508817    7212 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:44:08.800623    7212 start.go:496] detecting cgroup driver to use...
	I1205 06:44:08.800623    7212 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:44:08.805535    7212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 06:44:08.829203    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:44:08.853336    7212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:44:08.916688    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:44:08.939467    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:44:08.959334    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:44:08.987138    7212 ssh_runner.go:195] Run: which cri-dockerd
	I1205 06:44:08.999563    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 06:44:09.015960    7212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 06:44:09.041179    7212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 06:44:09.185621    7212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 06:44:09.352956    7212 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 06:44:09.352956    7212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 06:44:09.378298    7212 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 06:44:09.400179    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:09.536455    7212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 06:44:10.536962    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:44:10.559790    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 06:44:10.581363    7212 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 06:44:10.609733    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:44:10.632909    7212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 06:44:10.776807    7212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 06:44:10.916613    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:11.075698    7212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 06:44:11.101329    7212 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 06:44:11.124502    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:11.266418    7212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 06:44:11.403053    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 06:44:11.422521    7212 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 06:44:11.426547    7212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 06:44:11.436721    7212 start.go:564] Will wait 60s for crictl version
	I1205 06:44:11.441180    7212 ssh_runner.go:195] Run: which crictl
	I1205 06:44:11.452770    7212 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:44:11.501872    7212 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 06:44:11.505976    7212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:44:11.549324    7212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 06:44:11.588516    7212 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 06:44:11.591303    7212 cli_runner.go:164] Run: docker exec -t functional-247800 dig +short host.docker.internal
	I1205 06:44:11.795650    7212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 06:44:11.800040    7212 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 06:44:11.812421    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:11.871146    7212 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:44:11.873758    7212 kubeadm.go:884] updating cluster {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:44:11.873758    7212 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:44:11.877101    7212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 06:44:11.912208    7212 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-247800
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
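Every image required for Kubernetes v1.35.0-beta.0 is already present in the container's Docker daemon, so cache_images skips loading. The same listing can be reproduced by hand (a sketch; the profile name is taken from this log):

    minikube -p functional-247800 ssh -- docker images --format "{{.Repository}}:{{.Tag}}"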
	I1205 06:44:11.912283    7212 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:44:11.912321    7212 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1205 06:44:11.912565    7212 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-247800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:44:11.916049    7212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 06:44:12.318628    7212 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:44:12.318628    7212 cni.go:84] Creating CNI manager for ""
	I1205 06:44:12.318628    7212 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:44:12.318628    7212 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:44:12.318628    7212 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-247800 NodeName:functional-247800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:44:12.318628    7212 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-247800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
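The rendered config is written out as /var/tmp/minikube/kubeadm.yaml.new (the 2075-byte scp below). A hedged way to sanity-check such a file by hand, assuming the bundled kubeadm supports the "config validate" subcommand:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new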
	
	I1205 06:44:12.323147    7212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:44:12.338722    7212 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:44:12.342793    7212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:44:12.357028    7212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 06:44:12.378067    7212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:44:12.397995    7212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1205 06:44:12.425172    7212 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:44:12.436596    7212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:44:12.576722    7212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:44:12.598561    7212 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800 for IP: 192.168.49.2
	I1205 06:44:12.598561    7212 certs.go:195] generating shared ca certs ...
	I1205 06:44:12.598561    7212 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:44:12.599202    7212 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 06:44:12.599202    7212 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 06:44:12.599202    7212 certs.go:257] generating profile certs ...
	I1205 06:44:12.600184    7212 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\client.key
	I1205 06:44:12.600278    7212 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key.870be15d
	I1205 06:44:12.600278    7212 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key
	I1205 06:44:12.601471    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 06:44:12.601693    7212 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 06:44:12.601727    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 06:44:12.601917    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 06:44:12.602080    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 06:44:12.602241    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 06:44:12.602561    7212 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 06:44:12.604739    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:44:12.633587    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:44:12.661761    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:44:12.693749    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 06:44:12.724397    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:44:12.753386    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:44:12.782245    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:44:12.808447    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-247800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:44:12.837845    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 06:44:12.868598    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 06:44:12.897877    7212 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:44:12.923594    7212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:44:12.948661    7212 ssh_runner.go:195] Run: openssl version
	I1205 06:44:12.969868    7212 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 06:44:12.988567    7212 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 06:44:13.010500    7212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 06:44:13.020473    7212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 06:44:13.025024    7212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 06:44:13.078161    7212 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:44:13.096521    7212 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.112321    7212 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:44:13.130493    7212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.138877    7212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.143299    7212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:44:13.190013    7212 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:44:13.206117    7212 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.222622    7212 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 06:44:13.239301    7212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.246185    7212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.249183    7212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 06:44:13.298930    7212 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
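The openssl/ln pairs above implement the standard OpenSSL CA-directory layout: each certificate is symlinked under its subject-hash name so library lookups in /etc/ssl/certs succeed (b5213941.0 is the hash of minikubeCA.pem in this run). The pattern, sketched for one cert:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941
    sudo ln -fs "$cert" /etc/ssl/certs/"$h".0    # what the 'test -L' lines verify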
	I1205 06:44:13.315886    7212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:44:13.326014    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:44:13.380225    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:44:13.429527    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:44:13.479032    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:44:13.536127    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:44:13.583832    7212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:44:13.629178    7212 kubeadm.go:401] StartCluster: {Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:13.633659    7212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:44:13.671791    7212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:44:13.685483    7212 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:44:13.685483    7212 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:44:13.690488    7212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:44:13.703539    7212 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.707834    7212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
	I1205 06:44:13.761963    7212 kubeconfig.go:125] found "functional-247800" server: "https://127.0.0.1:55398"
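The inspect template walks Docker's port map: the inner index (.NetworkSettings.Ports "8441/tcp") selects the list of host bindings for container port 8441, the outer index takes the first binding, and .HostPort extracts the published host port. That is where the kubeconfig server address https://127.0.0.1:55398 on the line above comes from.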
	I1205 06:44:13.770445    7212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:44:13.785736    7212 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:26:36.498184726 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:44:12.408045869 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
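Drift detection is a plain diff: minikube renders the desired configuration to kubeadm.yaml.new and compares it with the copy already on the node; a non-zero diff exit is treated as drift and triggers the control-plane reconfiguration that follows. The one-hunk diff above is exactly the apiserver enable-admission-plugins override recorded in the StartCluster ExtraOptions earlier. A minimal sketch of the check, assuming the paths from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // diff exits 0 when the files match and 1 when they differ; minikube
        // treats a non-zero exit as config drift and reconfigures the cluster.
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
        fmt.Println("config drift:", err != nil)
    }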
	I1205 06:44:13.785736    7212 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:44:13.789544    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 06:44:13.823105    7212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:44:13.848716    7212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:44:13.861649    7212 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  5 06:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  5 06:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  5 06:30 /etc/kubernetes/scheduler.conf
	
	I1205 06:44:13.866874    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:44:13.884456    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:44:13.897824    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.902988    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:44:13.923754    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:44:13.938317    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.942723    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:44:13.963344    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:44:13.977185    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:44:13.982171    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
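Each static kubeconfig is grepped for the expected endpoint https://control-plane.minikube.internal:8441; grep exits 1 when the string is absent, and minikube then deletes that file so the kubeconfig phase in the sequence below can regenerate it. Here admin.conf matched, while kubelet.conf, controller-manager.conf and scheduler.conf did not and were removed.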
	I1205 06:44:13.999803    7212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:44:14.022527    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:14.262599    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:14.847747    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:15.087926    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:44:15.158153    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
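Rather than running a full "kubeadm init", the restart replays the individual phases against the updated config: certs all, kubeconfig all, kubelet-start, control-plane all, and etcd local, each invoked as "kubeadm init phase <name> --config /var/tmp/minikube/kubeadm.yaml" using the version-pinned binaries under /var/lib/minikube/binaries/v1.35.0-beta.0. This regenerates exactly the pieces invalidated above (certificates, kubeconfigs, static pod manifests) and restarts the kubelet against them.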
	I1205 06:44:15.213568    7212 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:44:15.218358    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~500ms, 120 attempts from 06:44:15 through 06:45:14, without ever finding a kube-apiserver process ...]
	I1205 06:45:15.220591    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:15.359365    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.359365    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:15.363072    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:15.396147    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.396147    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:15.400087    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:15.427850    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.427850    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:15.432163    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:15.465699    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.465738    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:15.470379    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:15.497629    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.497629    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:15.501723    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:15.532988    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.532988    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:15.536536    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:15.566283    7212 logs.go:282] 0 containers: []
	W1205 06:45:15.566283    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:15.566312    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:15.566312    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:15.596491    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:15.596491    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:15.856069    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:15.847392   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.848413   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.849846   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.851083   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:15.852149   24224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1205 06:45:15.856069    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:15.856069    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:15.909731    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:15.909731    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:16.118756    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:16.118756    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
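With the wait exhausted, each retry cycle now begins by collecting diagnostics. The "docker ps -a --filter=name=k8s_<component>" probes look for the control-plane containers by the k8s_ name prefix that cri-dockerd gives pod containers; all of them return 0 containers, and "describe nodes" cannot reach localhost:8441, which is consistent with the kubelet never bringing up the static pods after the kubelet-start phase.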
	I1205 06:45:18.687756    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:18.710448    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:18.741748    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.741748    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:18.745513    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:18.775360    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.775360    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:18.779658    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:18.809441    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.809501    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:18.813014    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:18.838816    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.838816    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:18.844145    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:18.873602    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.873602    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:18.877250    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:18.905073    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.905073    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:18.909137    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:18.936411    7212 logs.go:282] 0 containers: []
	W1205 06:45:18.936411    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:18.936411    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:18.936411    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:18.998916    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:18.998916    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:19.033230    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:19.033230    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:19.127028    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:19.115750   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.116628   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.119350   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.120153   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:19.122188   24384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the stderr shown above)
	** /stderr **
	I1205 06:45:19.127028    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:19.127028    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:19.167683    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:19.167683    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:21.730298    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:21.753423    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:21.784095    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.784095    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:21.787764    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:21.817963    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.817963    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:21.821515    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:21.850539    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.850539    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:21.854672    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:21.884098    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.884098    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:21.887228    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:21.917593    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.917593    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:21.921273    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:21.949149    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.949149    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:21.955019    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:21.983212    7212 logs.go:282] 0 containers: []
	W1205 06:45:21.983212    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:21.983212    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:21.983212    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:22.012499    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:22.012499    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:22.098043    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:22.089093   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.090339   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.091481   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.093690   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.095138   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:22.089093   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.090339   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.091481   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.093690   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:22.095138   24535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:22.098043    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:22.098090    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:22.141887    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:22.141887    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:22.194066    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:22.194066    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:24.762325    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:24.785756    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:24.815636    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.815636    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:24.819508    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:24.847760    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.847760    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:24.851370    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:24.881012    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.881012    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:24.884680    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:24.912270    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.912270    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:24.916105    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:24.953416    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.953416    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:24.956423    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:24.990968    7212 logs.go:282] 0 containers: []
	W1205 06:45:24.990968    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:24.994533    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:25.027815    7212 logs.go:282] 0 containers: []
	W1205 06:45:25.027815    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:25.027815    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:25.027815    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:25.071824    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:25.071824    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:25.123386    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:25.123386    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:25.186859    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:25.186859    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:25.219822    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:25.219822    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:25.305505    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:25.292510   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.293380   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.296569   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.299079   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.300434   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:25.292510   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.293380   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.296569   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.299079   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:25.300434   24720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:27.811804    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:27.835330    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:27.868714    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.868714    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:27.872277    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:27.903268    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.903268    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:27.906779    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:27.936644    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.936644    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:27.940640    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:27.969693    7212 logs.go:282] 0 containers: []
	W1205 06:45:27.969693    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:27.973532    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:28.000473    7212 logs.go:282] 0 containers: []
	W1205 06:45:28.000547    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:28.004187    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:28.046248    7212 logs.go:282] 0 containers: []
	W1205 06:45:28.046248    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:28.050184    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:28.081528    7212 logs.go:282] 0 containers: []
	W1205 06:45:28.081528    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:28.081528    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:28.081528    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:28.144979    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:28.144979    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:28.176452    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:28.177413    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:28.262251    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:28.249064   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.249788   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.253017   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.253886   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.257064   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:28.249064   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.249788   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.253017   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.253886   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:28.257064   24851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:28.262273    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:28.262273    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:28.303948    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:28.303948    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:30.856838    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:30.882687    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:30.912491    7212 logs.go:282] 0 containers: []
	W1205 06:45:30.912491    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:30.915939    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:30.944823    7212 logs.go:282] 0 containers: []
	W1205 06:45:30.944823    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:30.948477    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:30.977785    7212 logs.go:282] 0 containers: []
	W1205 06:45:30.977785    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:30.981554    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:31.007905    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.007905    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:31.012068    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:31.041316    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.041365    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:31.044854    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:31.073275    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.073313    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:31.076949    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:31.106563    7212 logs.go:282] 0 containers: []
	W1205 06:45:31.106563    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:31.106563    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:31.106563    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:31.168102    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:31.169096    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:31.199523    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:31.199523    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:31.278155    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:31.270054   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.271046   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.271939   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.274002   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.275003   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:31.270054   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.271046   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.271939   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.274002   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:31.275003   25010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:31.278155    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:31.278155    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:31.318537    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:31.318537    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:33.870845    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:33.895814    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:33.927486    7212 logs.go:282] 0 containers: []
	W1205 06:45:33.927522    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:33.931201    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:33.958957    7212 logs.go:282] 0 containers: []
	W1205 06:45:33.958957    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:33.962725    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:33.989625    7212 logs.go:282] 0 containers: []
	W1205 06:45:33.989687    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:33.993181    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:34.023503    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.023522    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:34.027565    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:34.061896    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.061896    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:34.065443    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:34.096984    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.096984    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:34.101057    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:34.131058    7212 logs.go:282] 0 containers: []
	W1205 06:45:34.131123    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:34.131123    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:34.131123    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:34.196576    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:34.196576    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:34.225898    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:34.225898    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:34.311791    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:34.301898   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.303342   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.305061   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.306151   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.307428   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:34.301898   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.303342   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.305061   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.306151   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:34.307428   25166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
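Every kubectl attempt in the block above fails at the TCP dial itself: nothing is listening on localhost:8441 because, as the empty docker ps probes show, no kube-apiserver container has been created. A minimal Go sketch of the same reachability check (host and port taken from the log; illustrative only, not minikube's code):

// Illustrative sketch only, not minikube code: reproduce the dial
// failure seen above. With no kube-apiserver running on this port,
// DialTimeout returns "connect: connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}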
	I1205 06:45:34.311791    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:34.311791    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:34.354337    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:34.354337    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
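One full diagnostic cycle ends here, and the same sequence repeats roughly every three seconds: probe for each control-plane component's k8s_-prefixed container, find none, and re-gather the kubelet, dmesg, describe-nodes, Docker, and container-status logs. A rough Go sketch of that polling pattern (component list and 3-second cadence read off the log lines; assumes a docker CLI on PATH, and is not minikube's implementation):

// Rough sketch of the retry pattern visible in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for {
		anyFound := false
		for _, c := range components {
			ids := containerIDs(c)
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
			anyFound = anyFound || len(ids) > 0
		}
		if anyFound {
			return // control plane is (at least partially) up; stop polling
		}
		// In the real log, a round of kubelet/dmesg/describe-nodes/
		// Docker/container-status gathering happens here before retrying.
		time.Sleep(3 * time.Second)
	}
}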
	I1205 06:45:36.906318    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:36.928488    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:36.957454    7212 logs.go:282] 0 containers: []
	W1205 06:45:36.957454    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:36.961333    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:36.988983    7212 logs.go:282] 0 containers: []
	W1205 06:45:36.988983    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:36.992781    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:37.022093    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.022125    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:37.025722    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:37.057399    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.057399    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:37.060912    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:37.087172    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.087226    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:37.090387    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:37.119994    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.119994    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:37.123787    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:37.151631    7212 logs.go:282] 0 containers: []
	W1205 06:45:37.151631    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:37.151631    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:37.151631    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:37.195262    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:37.195262    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:37.246012    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:37.246080    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:37.316036    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:37.316036    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:37.345867    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:37.345867    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:37.426410    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:37.416173   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.417045   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.420344   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.421777   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.422945   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:37.416173   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.417045   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.420344   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.421777   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:37.422945   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:39.932657    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:39.956038    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:39.984625    7212 logs.go:282] 0 containers: []
	W1205 06:45:39.984653    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:39.988440    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:40.017153    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.017153    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:40.020736    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:40.052553    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.052621    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:40.056300    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:40.085219    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.085219    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:40.089578    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:40.121915    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.121915    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:40.125581    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:40.154622    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.154673    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:40.158465    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:40.188578    7212 logs.go:282] 0 containers: []
	W1205 06:45:40.188578    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:40.188578    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:40.188578    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:40.245066    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:40.245066    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:40.305771    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:40.305771    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:40.337088    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:40.337088    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:40.418759    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:40.409806   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.410826   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.412482   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.414429   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.415726   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:40.409806   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.410826   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.412482   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.414429   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:40.415726   25480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:40.419320    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:40.419320    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:42.967507    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:42.991075    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:43.021034    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.021108    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:43.024790    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:43.053883    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.053883    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:43.057674    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:43.088625    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.088625    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:43.092086    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:43.119636    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.119636    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:43.122763    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:43.150111    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.150111    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:43.154265    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:43.182836    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.182836    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:43.186792    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:43.225828    7212 logs.go:282] 0 containers: []
	W1205 06:45:43.225828    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:43.225828    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:43.225828    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:43.290065    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:43.290065    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:43.321138    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:43.321138    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:43.398577    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:43.389973   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.390880   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.393291   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.394242   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.395582   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:43.389973   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.390880   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.393291   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.394242   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:43.395582   25618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:43.398577    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:43.398577    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:43.439980    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:43.439980    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
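The container-status command in each cycle is itself a shell fallback: "which crictl" picks crictl when it is installed; otherwise the echoed bare name fails to execute and "|| sudo docker ps -a" takes over. The same selection logic as a Go sketch (assuming only that either CLI may be absent; crictl normally also needs root and a runtime endpoint, which the sudo in the log provides):

// Sketch of the crictl-or-docker fallback; not minikube source.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tool := "docker" // fallback, as in `... || sudo docker ps -a`
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("%s ps -a failed: %v\n", tool, err)
		return
	}
	fmt.Print(string(out))
}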
	I1205 06:45:46.001165    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:46.028196    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:46.061568    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.061568    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:46.065437    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:46.095425    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.095470    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:46.099504    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:46.130002    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.130002    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:46.133511    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:46.162609    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.162689    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:46.166324    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:46.195578    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.195578    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:46.199354    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:46.228354    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.228354    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:46.232169    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:46.261558    7212 logs.go:282] 0 containers: []
	W1205 06:45:46.261595    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:46.261595    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:46.261623    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:46.304385    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:46.304385    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:46.359760    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:46.359760    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:46.422582    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:46.422582    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:46.452110    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:46.452110    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:46.530734    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:46.522774   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.523673   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.525329   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.526302   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.527328   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:46.522774   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.523673   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.525329   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.526302   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:46.527328   25782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:49.036286    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:49.060305    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:49.095037    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.095063    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:49.098656    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:49.128743    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.128778    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:49.132200    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:49.165097    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.165097    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:49.168869    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:49.200301    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.200301    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:49.203308    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:49.237385    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.237385    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:49.240910    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:49.270260    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.270293    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:49.273438    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:49.302145    7212 logs.go:282] 0 containers: []
	W1205 06:45:49.302145    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:49.302145    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:49.302145    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:49.366684    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:49.366684    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:49.396497    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:49.396497    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:49.481456    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:49.471608   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.472504   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.475721   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.477167   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.478188   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:49.471608   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.472504   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.475721   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.477167   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:49.478188   25915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:49.481456    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:49.481496    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:49.524124    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:49.525124    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:52.084310    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:52.107012    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:52.137266    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.137266    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:52.142096    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:52.169325    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.169325    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:52.174093    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:52.204247    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.205151    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:52.208943    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:52.238232    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.238322    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:52.241769    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:52.269688    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.269688    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:52.273627    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:52.303607    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.303607    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:52.307182    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:52.337626    7212 logs.go:282] 0 containers: []
	W1205 06:45:52.337626    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:52.337626    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:52.337626    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:52.398186    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:52.398186    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:52.428798    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:52.428798    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:52.514157    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:52.505332   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.506562   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.508566   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.509865   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.511864   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:52.505332   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.506562   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.508566   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.509865   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:52.511864   26063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:52.514157    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:52.514157    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:52.558771    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:52.558771    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:55.113907    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:55.143620    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:55.174228    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.174228    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:55.179458    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:55.209480    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.209480    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:55.213349    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:55.242540    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.242540    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:55.246462    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:55.276353    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.276353    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:55.280471    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:55.308841    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.308841    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:55.312911    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:55.341094    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.341094    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:55.344858    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:55.375031    7212 logs.go:282] 0 containers: []
	W1205 06:45:55.375031    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:55.375031    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:55.375031    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:55.437561    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:55.437561    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:55.473071    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:55.473071    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:55.550825    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:55.539067   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.541138   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.542837   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.543977   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.545029   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:55.539067   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.541138   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.542837   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.543977   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:55.545029   26212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:55.550825    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:55.550825    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:55.593704    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:55.593704    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:45:58.150849    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:45:58.173353    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:45:58.208754    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.208818    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:45:58.212164    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:45:58.243761    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.243761    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:45:58.250955    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:45:58.281367    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.281367    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:45:58.284495    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:45:58.316967    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.316967    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:45:58.320494    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:45:58.348625    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.348625    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:45:58.352160    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:45:58.381869    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.381903    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:45:58.385500    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:45:58.414468    7212 logs.go:282] 0 containers: []
	W1205 06:45:58.414468    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:45:58.414468    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:45:58.414468    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:45:58.477173    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:45:58.477173    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:45:58.510921    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:45:58.510921    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:45:58.588841    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:45:58.578179   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.579030   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.581977   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.583255   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.584598   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:45:58.578179   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.579030   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.581977   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.583255   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:45:58.584598   26363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:45:58.588841    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:45:58.588841    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:45:58.631288    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:45:58.631288    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:01.185827    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:01.211669    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:01.240318    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.240318    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:01.244369    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:01.272954    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.272984    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:01.276875    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:01.304496    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.304496    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:01.308428    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:01.337895    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.337895    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:01.342072    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:01.371342    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.371342    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:01.375396    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:01.405645    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.405645    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:01.409318    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:01.438488    7212 logs.go:282] 0 containers: []
	W1205 06:46:01.438488    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:01.438488    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:01.438488    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:01.501375    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:01.501375    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:01.531923    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:01.531923    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:01.611098    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:01.599379   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.600362   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.603424   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.604236   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.606692   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:46:01.599379   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.600362   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.603424   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.604236   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:01.606692   26513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:46:01.611098    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:01.611098    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:01.651778    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:01.651778    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:04.210929    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:04.234235    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:04.266339    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.266339    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:04.270369    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:04.298003    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.298003    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:04.301903    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:04.337407    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.337407    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:04.344300    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:04.372934    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.372934    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:04.376896    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:04.405443    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.405443    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:04.411712    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:04.445219    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.445219    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:04.448803    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:04.477773    7212 logs.go:282] 0 containers: []
	W1205 06:46:04.477773    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:04.477773    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:04.477773    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:04.540878    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:04.540878    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:04.574210    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:04.574255    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:04.661787    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:04.649784   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.650558   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.654016   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.655795   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:04.657103   26663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:04.661787    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:04.661828    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:04.705800    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:04.705800    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:07.260460    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:07.282560    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:07.313615    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.313615    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:07.317917    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:07.349712    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.349712    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:07.356819    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:07.386408    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.386408    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:07.391604    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:07.420438    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.420438    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:07.424140    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:07.462197    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.462237    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:07.465807    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:07.496995    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.497043    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:07.501612    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:07.531112    7212 logs.go:282] 0 containers: []
	W1205 06:46:07.531112    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:07.531112    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:07.531112    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:07.572585    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:07.572585    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:07.640780    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:07.640816    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:07.702867    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:07.702867    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:07.735207    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:07.735207    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:07.815128    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:07.804587   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.805658   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.806988   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.808251   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:07.809059   26826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:10.321242    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:10.347077    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:10.375550    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.375550    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:10.379531    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:10.409415    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.409415    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:10.413063    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:10.440057    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.440091    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:10.443652    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:10.472632    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.472632    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:10.477415    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:10.504835    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.504908    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:10.508498    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:10.536667    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.536667    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:10.540145    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:10.569461    7212 logs.go:282] 0 containers: []
	W1205 06:46:10.569461    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:10.569461    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:10.569461    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:10.623261    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:10.623261    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:10.687563    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:10.688564    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:10.722237    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:10.722237    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:10.805565    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:10.795710   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.796624   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.799048   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.800169   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.801133   26973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:10.805565    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:10.805565    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
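
	[Every "describe nodes" attempt fails the same way: the bundled kubectl dials https://localhost:8441 and gets connection refused on [::1]:8441, so nothing is listening on this profile's apiserver port. A quick reachability probe, as a sketch to run inside the node; the availability of curl there is an assumption:

	    curl -k 'https://localhost:8441/api?timeout=32s'
	    # while the apiserver is down this fails with "connection refused", matching the log;
	    # the exact command the harness keeps retrying is:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig]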
	I1205 06:46:13.353377    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:13.376836    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:13.408935    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.408935    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:13.412283    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:13.440589    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.440589    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:13.443942    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:13.471789    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.471789    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:13.475592    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:13.507158    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.507158    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:13.510673    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:13.539005    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.539005    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:13.542972    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:13.571336    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.571336    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:13.575544    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:13.607804    7212 logs.go:282] 0 containers: []
	W1205 06:46:13.607804    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:13.607804    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:13.607804    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:13.659026    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:13.659026    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:13.720978    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:13.720978    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:13.749991    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:13.749991    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:13.834647    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:13.826165   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.826856   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.829290   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.830477   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:13.831195   27124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:13.834647    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:13.834647    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:16.382602    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:16.405050    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:16.434952    7212 logs.go:282] 0 containers: []
	W1205 06:46:16.434952    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:16.438639    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:16.467860    7212 logs.go:282] 0 containers: []
	W1205 06:46:16.467860    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:16.471318    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:16.500902    7212 logs.go:282] 0 containers: []
	W1205 06:46:16.500902    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:16.504304    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:16.532647    7212 logs.go:282] 0 containers: []
	W1205 06:46:16.532693    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:16.536824    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:16.564360    7212 logs.go:282] 0 containers: []
	W1205 06:46:16.564438    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:16.567706    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:16.597119    7212 logs.go:282] 0 containers: []
	W1205 06:46:16.597119    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:16.600476    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:16.629886    7212 logs.go:282] 0 containers: []
	W1205 06:46:16.629911    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:16.629911    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:16.629911    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:16.691374    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:16.691374    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:16.750418    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:16.750418    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:16.782159    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:16.782192    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:16.862369    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:16.853414   27279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:16.854270   27279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:16.856792   27279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:16.857865   27279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:16.859334   27279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:16.862369    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:16.862369    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:19.407336    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:19.430057    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:19.461769    7212 logs.go:282] 0 containers: []
	W1205 06:46:19.461769    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:19.465525    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:19.491836    7212 logs.go:282] 0 containers: []
	W1205 06:46:19.491836    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:19.495761    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:19.525324    7212 logs.go:282] 0 containers: []
	W1205 06:46:19.525356    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:19.528800    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:19.558579    7212 logs.go:282] 0 containers: []
	W1205 06:46:19.558579    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:19.562051    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:19.589412    7212 logs.go:282] 0 containers: []
	W1205 06:46:19.589412    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:19.593146    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:19.622707    7212 logs.go:282] 0 containers: []
	W1205 06:46:19.622707    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:19.625895    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:19.656952    7212 logs.go:282] 0 containers: []
	W1205 06:46:19.656952    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:19.657020    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:19.657020    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:19.720896    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:19.720896    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:19.752072    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:19.752072    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:19.834344    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:19.826323   27411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.827288   27411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.828948   27411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.830277   27411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.831332   27411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:19.834344    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:19.834344    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:19.878582    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:19.878582    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
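
	[The rotating "Gathering logs for ..." steps are ordinary shell commands and can be replayed by hand to capture the same evidence outside the test harness. The commands below are verbatim from the Run lines above and are meant to be run inside the node:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	Of these, the kubelet and docker journals are the ones most likely to show why no control-plane container is ever created; the crictl and docker listings keep coming back empty.]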
	I1205 06:46:22.431710    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:22.454187    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:22.485102    7212 logs.go:282] 0 containers: []
	W1205 06:46:22.485102    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:22.488599    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:22.518242    7212 logs.go:282] 0 containers: []
	W1205 06:46:22.518242    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:22.522246    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:22.551216    7212 logs.go:282] 0 containers: []
	W1205 06:46:22.551216    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:22.556117    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:22.585264    7212 logs.go:282] 0 containers: []
	W1205 06:46:22.585264    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:22.589332    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:22.622681    7212 logs.go:282] 0 containers: []
	W1205 06:46:22.622681    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:22.626416    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:22.655508    7212 logs.go:282] 0 containers: []
	W1205 06:46:22.655508    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:22.658877    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:22.688473    7212 logs.go:282] 0 containers: []
	W1205 06:46:22.688473    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:22.688473    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:22.688473    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:22.731017    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:22.731017    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:22.782707    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:22.782707    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:22.844666    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:22.844666    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:22.874890    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:22.874890    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:22.957293    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:22.945687   27575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:22.946408   27575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:22.949856   27575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:22.951315   27575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:22.954701   27575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:25.461870    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:25.481732    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:25.508887    7212 logs.go:282] 0 containers: []
	W1205 06:46:25.508912    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:25.512223    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:25.539963    7212 logs.go:282] 0 containers: []
	W1205 06:46:25.539963    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:25.545555    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:25.574038    7212 logs.go:282] 0 containers: []
	W1205 06:46:25.574038    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:25.577835    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:25.609018    7212 logs.go:282] 0 containers: []
	W1205 06:46:25.609018    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:25.612491    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:25.645093    7212 logs.go:282] 0 containers: []
	W1205 06:46:25.645093    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:25.649133    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:25.677460    7212 logs.go:282] 0 containers: []
	W1205 06:46:25.677534    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:25.680896    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:25.708665    7212 logs.go:282] 0 containers: []
	W1205 06:46:25.708665    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:25.708665    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:25.708665    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:25.769723    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:25.769723    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:25.799615    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:25.799615    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:25.879893    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:25.869531   27710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:25.871260   27710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:25.872313   27710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:25.873848   27710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:25.875457   27710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:25.879893    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:25.880074    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:25.924240    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:25.924240    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:28.483406    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:28.505297    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:28.535489    7212 logs.go:282] 0 containers: []
	W1205 06:46:28.535489    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:28.539098    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:28.567509    7212 logs.go:282] 0 containers: []
	W1205 06:46:28.567509    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:28.571246    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:28.599239    7212 logs.go:282] 0 containers: []
	W1205 06:46:28.599239    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:28.603519    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:28.632376    7212 logs.go:282] 0 containers: []
	W1205 06:46:28.632376    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:28.636087    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:28.666889    7212 logs.go:282] 0 containers: []
	W1205 06:46:28.666889    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:28.670916    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:28.701935    7212 logs.go:282] 0 containers: []
	W1205 06:46:28.701935    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:28.705931    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:28.732451    7212 logs.go:282] 0 containers: []
	W1205 06:46:28.732451    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:28.732451    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:28.732451    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:28.795093    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:28.795093    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:28.825944    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:28.825944    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:28.915238    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:28.901769   27857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:28.902657   27857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:28.907833   27857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:28.908929   27857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:28.909853   27857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:28.915238    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:28.915238    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:28.957950    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:28.957950    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
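
	[Each probe filters by the dockershim-style k8s_ name prefix (k8s_kube-apiserver, k8s_etcd, and so on), so an unrelated running container would not be counted. To see everything the daemon actually has, a broader listing helps; the format string here is illustrative, not from the log:

	    docker ps -a --format '{{.ID}}\t{{.Names}}\t{{.Status}}'

	If this also comes back empty, the kubelet never asked Docker to create the static pods, which points back at the kubelet journal gathered above.]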
	I1205 06:46:31.513277    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:31.535959    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:31.567005    7212 logs.go:282] 0 containers: []
	W1205 06:46:31.567005    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:31.571004    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:31.603438    7212 logs.go:282] 0 containers: []
	W1205 06:46:31.603438    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:31.607873    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:31.638125    7212 logs.go:282] 0 containers: []
	W1205 06:46:31.638125    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:31.642109    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:31.669836    7212 logs.go:282] 0 containers: []
	W1205 06:46:31.669836    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:31.673433    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:31.700830    7212 logs.go:282] 0 containers: []
	W1205 06:46:31.700830    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:31.704893    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:31.732211    7212 logs.go:282] 0 containers: []
	W1205 06:46:31.732211    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:31.735179    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:31.763781    7212 logs.go:282] 0 containers: []
	W1205 06:46:31.763781    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:31.763781    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:31.763781    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:31.827964    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:31.827964    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:31.859703    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:31.859703    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:31.939017    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:31.927567   28008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:31.929616   28008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:31.930903   28008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:31.932021   28008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:31.933084   28008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:31.939017    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:31.939017    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:31.980497    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:31.980497    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:34.540855    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:34.565059    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:34.595703    7212 logs.go:282] 0 containers: []
	W1205 06:46:34.595703    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:34.599913    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:34.629039    7212 logs.go:282] 0 containers: []
	W1205 06:46:34.629039    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:34.635378    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:34.662774    7212 logs.go:282] 0 containers: []
	W1205 06:46:34.662774    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:34.666612    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:34.694999    7212 logs.go:282] 0 containers: []
	W1205 06:46:34.694999    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:34.698155    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:34.730402    7212 logs.go:282] 0 containers: []
	W1205 06:46:34.730432    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:34.734374    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:34.764670    7212 logs.go:282] 0 containers: []
	W1205 06:46:34.764670    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:34.768238    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:34.795402    7212 logs.go:282] 0 containers: []
	W1205 06:46:34.795402    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:34.795402    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:34.795402    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:34.843186    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:34.843186    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:34.902738    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:34.902738    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:34.931865    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:34.931865    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:35.010082    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:34.999424   28185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:35.001107   28185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:35.004121   28185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:35.005242   28185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:35.006570   28185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:35.010082    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:35.010082    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:37.557583    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:37.580282    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:37.612599    7212 logs.go:282] 0 containers: []
	W1205 06:46:37.612599    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:37.616539    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:37.647304    7212 logs.go:282] 0 containers: []
	W1205 06:46:37.647304    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:37.650705    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:37.677699    7212 logs.go:282] 0 containers: []
	W1205 06:46:37.677699    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:37.681372    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:37.711536    7212 logs.go:282] 0 containers: []
	W1205 06:46:37.711536    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:37.715342    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:37.743652    7212 logs.go:282] 0 containers: []
	W1205 06:46:37.743728    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:37.747039    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:37.775936    7212 logs.go:282] 0 containers: []
	W1205 06:46:37.775936    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:37.779584    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:37.807810    7212 logs.go:282] 0 containers: []
	W1205 06:46:37.807810    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:37.807810    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:37.807810    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:37.868944    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:37.868944    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:37.900495    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:37.900495    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:37.981033    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:37.968462   28321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:37.969463   28321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:37.975553   28321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:37.976449   28321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:37.978700   28321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:37.981033    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:37.981033    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:38.029778    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:38.029778    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:40.593265    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:40.616423    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:40.645880    7212 logs.go:282] 0 containers: []
	W1205 06:46:40.645880    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:40.650072    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:40.679348    7212 logs.go:282] 0 containers: []
	W1205 06:46:40.679348    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:40.682716    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:40.711500    7212 logs.go:282] 0 containers: []
	W1205 06:46:40.711500    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:40.715255    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:40.742163    7212 logs.go:282] 0 containers: []
	W1205 06:46:40.742163    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:40.745881    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:40.773098    7212 logs.go:282] 0 containers: []
	W1205 06:46:40.773098    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:40.776590    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:40.804442    7212 logs.go:282] 0 containers: []
	W1205 06:46:40.804442    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:40.808143    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:40.835322    7212 logs.go:282] 0 containers: []
	W1205 06:46:40.835322    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:40.835322    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:40.835322    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:40.898782    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:40.898782    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:40.929095    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:40.929095    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:41.009700    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:40.997378   28473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:40.998168   28473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:41.003155   28473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:41.004388   28473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:41.005337   28473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:41.009700    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:41.009700    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:41.051772    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:41.051772    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:43.609754    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.632554    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:43.662166    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.662166    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:43.665355    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:43.696151    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.696219    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:43.700087    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:43.727564    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.727564    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:43.731288    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:43.758985    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.758985    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:43.762842    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:43.790701    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.790701    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:43.793863    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:43.820625    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.820693    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:43.824094    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:43.851412    7212 logs.go:282] 0 containers: []
	W1205 06:46:43.851412    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:43.851412    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:43.851412    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:43.932012    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:43.923514   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.924816   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.925954   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.927352   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:43.928369   28608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:43.932012    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:43.932012    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:43.973822    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:43.973822    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:44.030002    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:44.030002    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:44.092544    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:44.092544    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:46.629663    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.653580    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:46.683980    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.683980    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:46.687586    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:46.717184    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.717184    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:46.721065    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:46.752185    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.752185    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:46.756108    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:46.784945    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.784945    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:46.789076    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:46.816728    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.816728    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:46.820832    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:46.849937    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.849937    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:46.853438    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:46.881199    7212 logs.go:282] 0 containers: []
	W1205 06:46:46.881199    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:46.881199    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:46.881199    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:46.962790    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:46.954028   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.954924   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.957321   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.958298   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:46.959408   28763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:46.962790    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:46.962790    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:47.007820    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:47.007820    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:47.066959    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:47.066959    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:47.125526    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:47.125526    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:49.660220    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.685156    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:49.717329    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.717329    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:49.721556    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:49.750686    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.750686    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:49.755424    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:49.783846    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.783846    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:49.787710    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:49.815924    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.815924    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:49.819919    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:49.849422    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.849422    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:49.852791    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:49.881693    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.881693    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:49.885723    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:49.911812    7212 logs.go:282] 0 containers: []
	W1205 06:46:49.911897    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:49.911897    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:49.911897    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:49.959749    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:49.959839    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:50.023079    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:50.023079    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:50.052407    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:50.053403    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:50.135599    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:50.126558   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.127490   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.129646   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.130468   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:50.132768   28944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:50.135599    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:50.135599    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:52.683359    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.706979    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:52.736319    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.736342    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:52.739824    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:52.767310    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.767310    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:52.770588    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:52.804418    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.804418    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:52.808338    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:52.836067    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.836133    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:52.840112    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:52.867407    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.867407    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:52.871353    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:52.903797    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.903797    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:52.907366    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:52.937346    7212 logs.go:282] 0 containers: []
	W1205 06:46:52.937346    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:52.937346    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:52.937346    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:52.966187    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:52.966187    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:53.057434    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:53.048926   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.050108   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.050951   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.053229   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:53.054407   29082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:53.057434    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:53.057434    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:53.098631    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:53.098631    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:53.151321    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:53.151321    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:55.719442    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.742352    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:55.776348    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.776348    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:55.780248    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:55.809917    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.809917    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:55.813910    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:55.842184    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.842184    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:55.845526    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:55.873424    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.873424    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:55.877454    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:55.904884    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.904914    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:55.908497    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:55.939112    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.939192    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:55.943140    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:55.972013    7212 logs.go:282] 0 containers: []
	W1205 06:46:55.972013    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:55.972013    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:55.972013    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:56.035906    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:56.035906    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:56.065757    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:56.065757    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:56.150728    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:56.139664   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.141024   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.142888   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.143569   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:56.145258   29236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:56.150728    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:56.150728    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:46:56.191341    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:56.191341    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:58.747043    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.769477    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:46:58.799752    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.799752    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:46:58.803430    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:46:58.834902    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.834902    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:46:58.839294    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:46:58.865557    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.865557    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:46:58.869041    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:46:58.898315    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.898315    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:46:58.902805    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:46:58.930333    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.930333    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:46:58.934379    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:46:58.961514    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.961514    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:46:58.965260    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:46:58.996805    7212 logs.go:282] 0 containers: []
	W1205 06:46:58.996805    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:46:58.996843    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:46:58.996843    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:46:59.046325    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:46:59.046325    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:46:59.108165    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:46:59.108165    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:46:59.139448    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:46:59.139448    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:46:59.221394    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:59.208830   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.211726   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.213247   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.214626   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:59.215488   29401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:46:59.221394    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:46:59.221394    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:01.769201    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.791200    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:01.821949    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.821949    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:01.825904    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:01.853210    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.853210    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:01.856535    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:01.884013    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.884013    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:01.887952    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:01.914871    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.914871    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:01.918934    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:01.949236    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.949236    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:01.953139    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:01.981582    7212 logs.go:282] 0 containers: []
	W1205 06:47:01.981582    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:01.985532    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:02.017739    7212 logs.go:282] 0 containers: []
	W1205 06:47:02.017739    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:02.017739    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:02.017739    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:02.080714    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:02.080714    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:02.115578    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:02.116565    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:02.197070    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:02.186132   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.187073   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.189368   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.190575   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:02.191559   29536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:02.197070    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:02.197070    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:02.240876    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:02.240876    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:04.794067    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.821244    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:04.850757    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.850757    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:04.854254    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:04.885802    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.885802    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:04.890179    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:04.921162    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.921162    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:04.927483    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:04.955593    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.955593    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:04.959593    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:04.987937    7212 logs.go:282] 0 containers: []
	W1205 06:47:04.987937    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:04.991470    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:05.021061    7212 logs.go:282] 0 containers: []
	W1205 06:47:05.021061    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:05.025471    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:05.055084    7212 logs.go:282] 0 containers: []
	W1205 06:47:05.055084    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:05.055084    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:05.055084    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:05.096463    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:05.096463    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:05.145562    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:05.145562    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:05.205614    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:05.205614    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:05.236105    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:05.236105    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:05.311644    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:05.300969   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.301790   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.302967   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.304711   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:05.306061   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
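Each polling cycle checks the control-plane components the same way: docker ps -a filtered on the k8s_ name prefix that Docker-based Kubernetes setups (dockershim/cri-dockerd) give managed containers, with a Go template printing only the ID. A condensed sketch of that scan (the loop is mine; the commands and component list mirror the log):

    # Condensed form of the per-component scan above:
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && echo "no container matching ${c}"
    done

All seven lookups return zero containers, so the components were never even created.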
	I1205 06:47:07.817415    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.841775    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:07.870798    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.870874    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:07.874275    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:07.904822    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.904822    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:07.909419    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:07.942476    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.942476    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:07.946622    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:07.982402    7212 logs.go:282] 0 containers: []
	W1205 06:47:07.982402    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:07.986368    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:08.018024    7212 logs.go:282] 0 containers: []
	W1205 06:47:08.018055    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:08.021599    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:08.053477    7212 logs.go:282] 0 containers: []
	W1205 06:47:08.053477    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:08.057913    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:08.086906    7212 logs.go:282] 0 containers: []
	W1205 06:47:08.086906    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:08.086906    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:08.086906    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:08.134105    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:08.134105    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:08.199234    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:08.199234    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:08.229538    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:08.229538    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:08.312358    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:08.302222   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.303399   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.304403   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.305532   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:08.306520   29856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:08.312358    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:08.312358    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
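The "Gathering logs for Docker" step passes journalctl two -u unit filters, so entries from the docker and cri-docker services come back merged in time order, and -n 400 caps the output at the newest 400 lines. Standard journalctl usage, shown here with --no-pager added for non-interactive capture (my addition; the log's bash -c invocation has no terminal attached anyway):

    # -u may be repeated; both units are interleaved chronologically.
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager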
	I1205 06:47:10.858986    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.882487    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:10.911485    7212 logs.go:282] 0 containers: []
	W1205 06:47:10.911485    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:10.915831    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:10.942529    7212 logs.go:282] 0 containers: []
	W1205 06:47:10.942529    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:10.946167    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:10.976549    7212 logs.go:282] 0 containers: []
	W1205 06:47:10.976549    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:10.980000    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:11.007377    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.007377    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:11.011696    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:11.040104    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.040154    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:11.043924    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:11.075338    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.075338    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:11.079214    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:11.108253    7212 logs.go:282] 0 containers: []
	W1205 06:47:11.108253    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:11.108283    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:11.108307    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:11.175507    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:11.175507    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:11.205125    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:11.205125    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:11.284350    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:11.274574   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.275635   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.276587   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.277908   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:11.279094   29988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:11.284350    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:11.284350    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:11.326425    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:11.326425    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
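The "container status" command relies on a small shell fallback: which crictl || echo crictl substitutes the literal word crictl when the binary is absent, so the first command fails cleanly and control falls through to sudo docker ps -a. The same chain with $() instead of backticks:

    # If crictl is missing, `which` fails, the echo supplies the bare name,
    # "sudo crictl" then fails to exec, and the docker fallback runs.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a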
	I1205 06:47:13.882929    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.908644    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:13.938949    7212 logs.go:282] 0 containers: []
	W1205 06:47:13.938949    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:13.942723    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:13.972036    7212 logs.go:282] 0 containers: []
	W1205 06:47:13.972036    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:13.975608    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:14.006942    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.006942    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:14.010883    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:14.039783    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.039783    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:14.043702    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:14.074699    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.074699    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:14.081714    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:14.115797    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.115797    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:14.120240    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:14.148949    7212 logs.go:282] 0 containers: []
	W1205 06:47:14.148949    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:14.149031    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:14.149031    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:14.177232    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:14.177256    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:14.253729    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:14.243636   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.244393   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.247381   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.249374   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:14.250396   30135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:14.253729    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:14.253729    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:14.296929    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:14.296929    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:14.345234    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:14.345234    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
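The probe that opens every cycle, sudo pgrep -xnf kube-apiserver.*minikube.*, combines three standard pgrep flags:

    # -f  match the pattern against the full command line, not just the name
    # -x  require the pattern to match that string exactly (anchored)
    # -n  print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

It returns nothing here, matching the empty docker ps scans.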
	I1205 06:47:16.913879    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.936232    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:16.966712    7212 logs.go:282] 0 containers: []
	W1205 06:47:16.966712    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:16.970413    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:17.000882    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.000882    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:17.004782    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:17.033768    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.033835    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:17.037295    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:17.064692    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.064692    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:17.068384    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:17.094942    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.094942    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:17.099041    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:17.128853    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.128853    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:17.132347    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:17.162220    7212 logs.go:282] 0 containers: []
	W1205 06:47:17.162220    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:17.162302    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:17.162302    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:17.218623    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:17.218623    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:17.279679    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:17.279679    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:17.310820    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:17.310820    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:17.392378    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:17.383714   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.384601   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.387089   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.388284   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:17.389419   30302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:17.392378    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:17.392378    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
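Note that the "describe nodes" step runs the kubectl binary minikube ships on the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the node-local kubeconfig, so the connection refusal is observed from inside the machine, not from the Windows host. A rough host-side equivalent, assuming the profile's context is active in the local kubeconfig (my assumption):

    # From the host, with the profile's context selected:
    kubectl describe nodes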
	I1205 06:47:19.937296    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.960229    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:19.991535    7212 logs.go:282] 0 containers: []
	W1205 06:47:19.991535    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:19.994703    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:20.027498    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.027498    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:20.031400    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:20.061103    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.061103    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:20.064617    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:20.094571    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.094571    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:20.098564    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:20.126979    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.126979    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:20.130800    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:20.163761    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.163761    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:20.167687    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:20.199132    7212 logs.go:282] 0 containers: []
	W1205 06:47:20.199132    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:20.199132    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:20.199132    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:20.283995    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:20.273544   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.275313   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.276695   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.277723   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:20.278623   30430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:20.283995    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:20.283995    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:20.327148    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:20.327148    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:20.376774    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:20.376833    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:20.440840    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:20.440840    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
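The dmesg step narrows the kernel ring buffer to warning-and-worse records: --level warn,err,crit,alert,emerg selects priorities, -H adds human-readable timestamps, -P suppresses the pager that -H would start, and -L=never disables color so the captured text stays clean; tail -n 400 bounds the size. The same command with util-linux long options spelled out:

    sudo dmesg --human --nopager --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400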
	I1205 06:47:22.976319    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.998933    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:23.029032    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.029032    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:23.032581    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:23.063885    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.063913    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:23.067412    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:23.097477    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.097477    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:23.102023    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:23.131128    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.131128    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:23.135559    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:23.163786    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.163786    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:23.166836    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:23.196149    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.196149    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:23.200130    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:23.226149    7212 logs.go:282] 0 containers: []
	W1205 06:47:23.226149    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:23.226149    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:23.226149    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:23.270734    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:23.270734    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:23.321432    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:23.321432    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:23.384463    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:23.384463    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:23.414734    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:23.414734    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:23.498131    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:23.486398   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.487278   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.492370   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.493315   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:23.495473   30600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:26.003605    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.026424    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:26.057455    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.057455    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:26.061184    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:26.089693    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.089693    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:26.093561    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:26.120896    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.120896    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:26.125918    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:26.156135    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.156171    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:26.160046    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:26.190573    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.190652    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:26.194129    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:26.222980    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.222980    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:26.226578    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:26.255995    7212 logs.go:282] 0 containers: []
	W1205 06:47:26.255995    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:26.255995    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:26.255995    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:26.316891    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:26.316891    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:26.344781    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:26.345781    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:26.424418    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:26.415112   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.416239   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.417414   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.418720   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:26.419921   30731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:26.424418    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:26.424418    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:26.466578    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:26.466578    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
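The timestamps show a full scan-and-gather cycle completing roughly every three seconds (06:47:23, 06:47:26, 06:47:29, ...), i.e. a fixed short retry loop waiting for the apiserver to appear. One way to watch for the same condition by hand (illustrative only):

    # Poll until a k8s_kube-apiserver container shows up:
    while ! docker ps --filter name=k8s_kube-apiserver -q | grep -q .; do
      echo "$(date +%T) still no kube-apiserver container"; sleep 3
    done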
	I1205 06:47:29.021029    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.042745    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:29.072233    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.072233    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:29.076192    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:29.106021    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.106021    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:29.110492    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:29.142373    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.142436    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:29.145869    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:29.177863    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.177863    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:29.182256    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:29.213617    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.213617    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:29.217234    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:29.248409    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.248409    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:29.251948    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:29.279697    7212 logs.go:282] 0 containers: []
	W1205 06:47:29.279697    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:29.279697    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:29.279697    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:29.306595    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:29.306595    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:29.387588    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:29.376998   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.377931   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.380231   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.381708   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:29.383241   30878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:29.387588    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:29.387588    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:29.432358    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:29.432358    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:29.491687    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:29.491687    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:32.058315    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:32.080902    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:32.112180    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.112180    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:32.115940    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:32.149909    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.149909    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:32.153337    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:32.182212    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.182212    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:32.185857    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:32.214479    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.214479    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:32.218198    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:32.244828    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.244828    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:32.248159    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:32.276613    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.276613    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:32.282850    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:32.312038    7212 logs.go:282] 0 containers: []
	W1205 06:47:32.312038    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:32.312038    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:32.312038    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:32.395073    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:32.382638   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.383368   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.387782   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.388958   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:32.389569   31023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:32.395073    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:32.395073    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:32.438081    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:32.438081    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:32.483065    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:32.483065    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:32.543549    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:32.543549    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:35.082420    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:35.109047    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:35.138903    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.138903    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:35.142559    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:35.169925    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.169925    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:35.176120    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:35.207119    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.207119    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:35.210472    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:35.237822    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.237822    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:35.241605    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:35.269404    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.269404    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:35.272713    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:35.302852    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.302852    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:35.306750    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:35.335749    7212 logs.go:282] 0 containers: []
	W1205 06:47:35.335749    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:35.335749    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:35.335749    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:35.362313    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:35.362313    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:35.447710    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:35.436173   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.437471   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.438375   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.440501   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:35.441298   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:35.447756    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:35.447784    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:35.488801    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:35.488801    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:35.538430    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:35.538430    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:38.105092    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:38.127701    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:38.158329    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.158329    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:38.162322    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:38.190981    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.190981    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:38.194648    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:38.224869    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.224869    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:38.228377    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:38.259328    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.259328    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:38.262581    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:38.290225    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.290225    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:38.293900    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:38.323002    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.323002    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:38.325942    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:38.356122    7212 logs.go:282] 0 containers: []
	W1205 06:47:38.356122    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:38.356158    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:38.356190    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:38.421485    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:38.421485    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:38.451418    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:38.451418    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:38.534923    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:38.524924   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.525955   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.526945   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.528136   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:38.529104   31326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:38.534923    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:38.534923    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:38.579182    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:38.579182    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:41.132133    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:41.155916    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:41.190632    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.190671    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:41.194307    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:41.224743    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.224743    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:41.228450    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:41.255924    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.255924    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:41.259608    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:41.287623    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.287623    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:41.291302    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:41.320832    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.320832    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:41.324515    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:41.352503    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.352503    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:41.357486    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:41.384618    7212 logs.go:282] 0 containers: []
	W1205 06:47:41.384618    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:41.384618    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:41.384618    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:41.450555    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:41.450555    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:41.481950    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:41.481950    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:41.556790    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:41.546857   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.547777   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.550205   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.551277   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:41.552372   31479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:41.556790    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:41.556790    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:41.597562    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:41.597562    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:44.157547    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:44.182064    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:44.211702    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.211702    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:44.216365    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:44.244631    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.244631    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:44.248073    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:44.276763    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.276763    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:44.280181    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:44.306409    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.306409    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:44.312584    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:44.340481    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.340481    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:44.344742    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:44.376686    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.376686    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:44.380570    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:44.409366    7212 logs.go:282] 0 containers: []
	W1205 06:47:44.409410    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:44.409410    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:44.409410    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:44.472548    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:44.472548    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:44.503264    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:44.503264    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:44.582552    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:44.572346   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.574184   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.575200   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.578087   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:44.579345   31627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:44.582552    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:44.582552    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:44.624563    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:44.624563    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:47.178449    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:47.200708    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:47.234713    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.234713    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:47.238519    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:47.267129    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.267129    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:47.270852    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:47.300990    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.300990    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:47.304715    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:47.333260    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.333327    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:47.336691    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:47.366566    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.366566    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:47.370142    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:47.398076    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.398076    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:47.401547    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:47.430057    7212 logs.go:282] 0 containers: []
	W1205 06:47:47.430057    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:47.430057    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:47.430109    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:47.474316    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:47.474316    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:47.528972    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:47.529068    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:47.598649    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:47.598649    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:47.629147    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:47.629147    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:47.719619    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:47.707742   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.709680   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.711980   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.714822   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:47.715445   31800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
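
The recurring `dial tcp [::1]:8441: connect: connection refused` in every stderr block means nothing is listening on the apiserver port inside the node, so each kubectl call fails before it can even authenticate. A quick manual confirmation from inside the node (a sketch; port 8441 comes from the log, and it assumes `ss` and `curl` are present in the node image):

	# Anything bound to the apiserver port?
	sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"

	# The same failure kubectl sees, reproduced directly.
	curl -k https://localhost:8441/healthz
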
	I1205 06:47:50.224894    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:50.249386    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:50.280435    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.280435    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:50.283799    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:50.310585    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.310585    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:50.313994    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:50.345240    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.345240    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:50.349156    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:50.377340    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.377340    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:50.381086    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:50.408519    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.408519    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:50.411662    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:50.443298    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.443298    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:50.446970    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:50.475494    7212 logs.go:282] 0 containers: []
	W1205 06:47:50.475494    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:50.475494    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:50.475494    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:50.538866    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:50.538866    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:50.568193    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:50.568193    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:50.646844    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:50.637514   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.638515   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.639306   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.641633   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:50.642407   31925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:50.646844    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:50.646844    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:50.692026    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:50.692026    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:53.247044    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:53.269060    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:53.300023    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.300059    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:53.303477    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:53.332467    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.332546    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:53.337763    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:53.367949    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.367993    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:53.371897    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:53.400010    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.400010    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:53.403505    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:53.434809    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.434809    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:53.438803    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:53.466413    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.466413    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:53.470011    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:53.498721    7212 logs.go:282] 0 containers: []
	W1205 06:47:53.498721    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:53.498721    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:53.498721    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:53.528848    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:53.528848    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:53.607294    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:53.597060   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.599213   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.600195   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.602429   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:53.603325   32072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:53.607294    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:53.607294    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:53.648012    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:53.648012    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:53.700266    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:53.700790    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:56.267783    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:56.289803    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:56.318251    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.318251    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:56.322075    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:56.349027    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.349027    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:56.352735    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:56.379632    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.379632    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:56.384305    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:56.411837    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.411837    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:56.415300    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:56.443062    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.443062    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:56.446823    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:56.475726    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.475726    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:56.479378    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:56.517912    7212 logs.go:282] 0 containers: []
	W1205 06:47:56.517912    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:56.517912    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:56.517912    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:56.596916    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:56.585115   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.586183   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.587141   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.589286   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:56.592015   32219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:56.596916    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:56.596962    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:56.637032    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:56.637032    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:56.684819    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:56.684819    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:56.747303    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:56.747303    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:59.281776    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:59.305247    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:47:59.335407    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.335407    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:59.338881    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:47:59.366851    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.366851    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:47:59.370328    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:47:59.399291    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.399291    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:47:59.402960    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:47:59.432515    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.432515    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:59.436801    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:47:59.467104    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.467104    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:59.470243    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:47:59.497877    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.497941    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:59.501112    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:47:59.529615    7212 logs.go:282] 0 containers: []
	W1205 06:47:59.529697    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:59.529697    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:59.529697    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:59.609983    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:59.598253   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.598877   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.601741   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.603978   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:59.605591   32373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:59.610022    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:47:59.610022    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:47:59.649863    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:47:59.649863    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:59.700479    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:59.700479    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:59.763989    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:59.763989    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:02.300047    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:02.322894    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:02.353230    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.353309    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:02.356900    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:02.385700    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.385700    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:02.388841    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:02.416101    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.416101    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:02.419750    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:02.447464    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.447464    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:02.450777    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:02.480237    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.480237    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:02.483526    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:02.511591    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.511591    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:02.515255    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:02.545284    7212 logs.go:282] 0 containers: []
	W1205 06:48:02.545284    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:02.545284    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:02.545284    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:02.610980    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:02.610980    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:02.642418    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:02.642418    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:02.726956    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:02.717020   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.718016   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.719379   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.720343   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:02.721493   32539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:02.726956    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:02.726956    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:02.771023    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:02.771023    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:05.327683    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:05.351195    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:05.381112    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.381112    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:05.384972    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:05.413259    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.413329    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:05.416730    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:05.445686    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.445686    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:05.449213    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:05.484954    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.484954    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:05.488455    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:05.519190    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.519228    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:05.522884    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:05.554807    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.554807    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:05.558365    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:05.587379    7212 logs.go:282] 0 containers: []
	W1205 06:48:05.587399    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:05.587399    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:05.587425    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:05.641465    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:05.641465    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:05.706506    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:05.706506    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:05.736869    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:05.736941    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:05.824292    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:05.814019   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.816401   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.817445   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.818646   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.819823   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:05.814019   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.816401   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.817445   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.818646   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:05.819823   32706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:05.824292    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:05.824292    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:08.371845    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:08.396050    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:08.433853    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.433853    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:08.437453    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:08.468504    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.468504    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:08.471946    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:08.507492    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.507492    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:08.511033    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:08.541947    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.541947    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:08.545843    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:08.575954    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.575954    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:08.579413    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:08.606879    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.606879    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:08.610759    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:08.640063    7212 logs.go:282] 0 containers: []
	W1205 06:48:08.640063    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:08.640115    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:08.640115    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:08.703340    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:08.703340    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:08.733278    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:08.733278    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:08.818249    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:08.805431   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.806342   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.811394   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.812436   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.813338   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:08.805431   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.806342   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.811394   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.812436   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:08.813338   32846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:08.818249    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:08.818249    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:08.862665    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:08.862665    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:11.417652    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:11.448987    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:11.478110    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.478110    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:11.483009    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:11.508939    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.508939    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:11.515716    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:11.546004    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.546004    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:11.550908    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:11.580644    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.580644    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:11.586014    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:11.614154    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.614154    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:11.618353    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:11.651170    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.651170    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:11.656537    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:11.686019    7212 logs.go:282] 0 containers: []
	W1205 06:48:11.686019    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:11.686019    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:11.687024    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:11.732747    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:11.732747    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:11.793464    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:11.793464    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:11.823414    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:11.823414    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:11.898268    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:11.889270   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.890352   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.891383   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.892797   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.893668   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:11.889270   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.890352   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.891383   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.892797   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:11.893668   33012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:11.898268    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:11.898268    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:14.445893    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:14.474707    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 06:48:14.507067    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.507090    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:14.510610    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 06:48:14.541536    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.541536    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:48:14.544693    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 06:48:14.573562    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.573562    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:48:14.577631    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 06:48:14.611830    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.611830    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:14.615419    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 06:48:14.646076    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.646076    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:14.649650    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 06:48:14.677233    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.677233    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:14.681207    7212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 06:48:14.716473    7212 logs.go:282] 0 containers: []
	W1205 06:48:14.716473    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:14.716473    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:14.716473    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:14.780720    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:14.780720    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:14.810274    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:14.810274    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:14.892394    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:14.882017   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.882944   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.885374   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.887829   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.889201   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:14.882017   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.882944   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.885374   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.887829   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:14.889201   33152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:14.892440    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:48:14.892463    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:48:14.935499    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:48:14.935499    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:17.497000    7212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:17.515654    7212 kubeadm.go:602] duration metric: took 4m3.8265772s to restartPrimaryControlPlane
	W1205 06:48:17.515654    7212 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:48:17.520476    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 06:48:18.188924    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:48:18.211141    7212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:48:18.226163    7212 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:48:18.231371    7212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:48:18.247460    7212 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:48:18.247460    7212 kubeadm.go:158] found existing configuration files:
	
	I1205 06:48:18.251775    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:48:18.267019    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:48:18.270577    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:48:18.291093    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:48:18.304172    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:48:18.307161    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:48:18.323174    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:48:18.334168    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:48:18.338162    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:48:18.354164    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:48:18.366170    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:48:18.369169    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:48:18.385163    7212 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:48:18.520419    7212 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 06:48:18.600326    7212 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:48:18.711687    7212 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:52:19.557610    7212 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:52:19.557683    7212 kubeadm.go:319] 
	I1205 06:52:19.557826    7212 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:52:19.561892    7212 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:52:19.562423    7212 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:52:19.562542    7212 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:52:19.562542    7212 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 06:52:19.562542    7212 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 06:52:19.562542    7212 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 06:52:19.563104    7212 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 06:52:19.563630    7212 kubeadm.go:319] CONFIG_INET: enabled
	I1205 06:52:19.563742    7212 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 06:52:19.563815    7212 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 06:52:19.564032    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 06:52:19.564214    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 06:52:19.564316    7212 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 06:52:19.564458    7212 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 06:52:19.565465    7212 kubeadm.go:319] OS: Linux
	I1205 06:52:19.565539    7212 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:52:19.565664    7212 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:52:19.565817    7212 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:52:19.565879    7212 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:52:19.566004    7212 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:52:19.566103    7212 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:52:19.566193    7212 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:52:19.566291    7212 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:52:19.566380    7212 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:52:19.566467    7212 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:52:19.566467    7212 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:52:19.566467    7212 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:52:19.566467    7212 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:52:19.570411    7212 out.go:252]   - Generating certificates and keys ...
	I1205 06:52:19.570411    7212 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:52:19.571029    7212 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:52:19.571550    7212 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:52:19.571603    7212 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:52:19.571603    7212 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:52:19.572575    7212 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:52:19.572575    7212 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:52:19.575966    7212 out.go:252]   - Booting up control plane ...
	I1205 06:52:19.575966    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:52:19.575966    7212 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:52:19.576966    7212 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:52:19.576966    7212 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001375391s
	I1205 06:52:19.576966    7212 kubeadm.go:319] 
	I1205 06:52:19.576966    7212 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:52:19.576966    7212 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:52:19.576966    7212 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:52:19.576966    7212 kubeadm.go:319] 
	I1205 06:52:19.577967    7212 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:52:19.577967    7212 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:52:19.577967    7212 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:52:19.577967    7212 kubeadm.go:319] 
	W1205 06:52:19.577967    7212 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001375391s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 06:52:19.583339    7212 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 06:52:20.041041    7212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:52:20.059958    7212 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:52:20.064870    7212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:52:20.077700    7212 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:52:20.077700    7212 kubeadm.go:158] found existing configuration files:
	
	I1205 06:52:20.082397    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:52:20.097746    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:52:20.102900    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:52:20.121456    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:52:20.135442    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:52:20.139595    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:52:20.159529    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:52:20.172924    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:52:20.176919    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:52:20.195400    7212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:52:20.209944    7212 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:52:20.214293    7212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:52:20.235566    7212 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:52:20.355259    7212 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 06:52:20.442209    7212 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:52:20.540382    7212 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:56:21.333777    7212 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 06:56:21.334317    7212 kubeadm.go:319] 
	I1205 06:56:21.334526    7212 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:56:21.342892    7212 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:56:21.342892    7212 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:56:21.342892    7212 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:56:21.342892    7212 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 06:56:21.342892    7212 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 06:56:21.342892    7212 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 06:56:21.342892    7212 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 06:56:21.343889    7212 kubeadm.go:319] CONFIG_INET: enabled
	I1205 06:56:21.344426    7212 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 06:56:21.344579    7212 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 06:56:21.345102    7212 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 06:56:21.345164    7212 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 06:56:21.345788    7212 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 06:56:21.345788    7212 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 06:56:21.345946    7212 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 06:56:21.346019    7212 kubeadm.go:319] OS: Linux
	I1205 06:56:21.346100    7212 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:56:21.346100    7212 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:56:21.346199    7212 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:56:21.346284    7212 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:56:21.346368    7212 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:56:21.346451    7212 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:56:21.346535    7212 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:56:21.346682    7212 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:56:21.346682    7212 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:56:21.346843    7212 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:56:21.347086    7212 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:56:21.347253    7212 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:56:21.347408    7212 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:56:21.350418    7212 out.go:252]   - Generating certificates and keys ...
	I1205 06:56:21.350418    7212 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:56:21.350418    7212 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:56:21.350952    7212 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:56:21.351041    7212 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:56:21.351645    7212 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:56:21.351645    7212 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:56:21.351645    7212 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:56:21.351645    7212 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:56:21.353617    7212 out.go:252]   - Booting up control plane ...
	I1205 06:56:21.353617    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:56:21.354622    7212 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:56:21.355622    7212 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:56:21.355622    7212 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:56:21.355622    7212 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000747056s
	I1205 06:56:21.355622    7212 kubeadm.go:319] 
	I1205 06:56:21.355622    7212 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:56:21.355622    7212 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:56:21.355622    7212 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:56:21.355622    7212 kubeadm.go:319] 
	I1205 06:56:21.355622    7212 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:56:21.355622    7212 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:56:21.356621    7212 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:56:21.356621    7212 kubeadm.go:319] 
	I1205 06:56:21.356621    7212 kubeadm.go:403] duration metric: took 12m7.7172113s to StartCluster
	I1205 06:56:21.356621    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:56:21.360622    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:56:21.601792    7212 cri.go:89] found id: ""
	I1205 06:56:21.601830    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.601858    7212 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:56:21.601858    7212 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:56:21.606583    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:56:21.653730    7212 cri.go:89] found id: ""
	I1205 06:56:21.653730    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.653730    7212 logs.go:284] No container was found matching "etcd"
	I1205 06:56:21.653730    7212 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:56:21.658389    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:56:21.703398    7212 cri.go:89] found id: ""
	I1205 06:56:21.703398    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.703398    7212 logs.go:284] No container was found matching "coredns"
	I1205 06:56:21.703398    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:56:21.707890    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:56:21.747639    7212 cri.go:89] found id: ""
	I1205 06:56:21.747639    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.747639    7212 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:56:21.747639    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:56:21.752626    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:56:21.800627    7212 cri.go:89] found id: ""
	I1205 06:56:21.800627    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.800627    7212 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:56:21.800627    7212 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:56:21.805173    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:56:21.844454    7212 cri.go:89] found id: ""
	I1205 06:56:21.844454    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.844454    7212 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:56:21.844454    7212 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:56:21.848782    7212 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:56:21.891771    7212 cri.go:89] found id: ""
	I1205 06:56:21.891771    7212 logs.go:282] 0 containers: []
	W1205 06:56:21.891771    7212 logs.go:284] No container was found matching "kindnet"
	I1205 06:56:21.891771    7212 logs.go:123] Gathering logs for kubelet ...
	I1205 06:56:21.891771    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:56:21.969778    7212 logs.go:123] Gathering logs for dmesg ...
	I1205 06:56:21.969778    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:56:22.005948    7212 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:56:22.005948    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:56:22.265248    7212 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:56:22.255844   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.256835   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259037   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259983   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.260673   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:56:22.255844   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.256835   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259037   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.259983   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:56:22.260673   41168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:56:22.265248    7212 logs.go:123] Gathering logs for Docker ...
	I1205 06:56:22.265248    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 06:56:22.308852    7212 logs.go:123] Gathering logs for container status ...
	I1205 06:56:22.308852    7212 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 06:56:22.367035    7212 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000747056s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:56:22.367035    7212 out.go:285] * 
	W1205 06:56:22.367247    7212 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000747056s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:56:22.367617    7212 out.go:285] * 
	W1205 06:56:22.369745    7212 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:56:22.374297    7212 out.go:203] 
	W1205 06:56:22.378243    7212 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000747056s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:56:22.378410    7212 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:56:22.378410    7212 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:56:22.381512    7212 out.go:203] 
	
	
	==> Docker <==
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406062974Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406068774Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406091077Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406121880Z" level=info msg="Initializing buildkit"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.521727722Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529404028Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529609450Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529612750Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529693058Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:44:10 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:10 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:44:11 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:44:11 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:57:18.293101   42330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:18.294371   42330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:18.298407   42330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:18.299785   42330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:18.300723   42330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:44] CPU: 6 PID: 67767 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000825] RIP: 0033:0x7f9683d26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7f9683d26af6.
	[  +0.000653] RSP: 002b:00007ffedb1b9ba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000774] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000786] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000895] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000804] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000818] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000794] FS:  0000000000000000 GS:  0000000000000000
	[  +0.946792] CPU: 8 PID: 67891 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f0ceb5efb20
	[  +0.000393] Code: Unable to access opcode bytes at RIP 0x7f0ceb5efaf6.
	[  +0.000679] RSP: 002b:00007fff219f5bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000791] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000868] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001135] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001172] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001044] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:57:18 up  2:31,  0 user,  load average: 0.19, 0.28, 0.41
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:57:15 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:15 functional-247800 kubelet[42169]: E1205 06:57:15.416900   42169 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:15 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:15 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:16 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 393.
	Dec 05 06:57:16 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:16 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:16 functional-247800 kubelet[42181]: E1205 06:57:16.158039   42181 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:16 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:16 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:16 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 394.
	Dec 05 06:57:16 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:16 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:16 functional-247800 kubelet[42193]: E1205 06:57:16.925998   42193 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:16 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:16 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:17 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 395.
	Dec 05 06:57:17 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:17 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:17 functional-247800 kubelet[42221]: E1205 06:57:17.690295   42221 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:17 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:17 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:18 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 396.
	Dec 05 06:57:18 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:18 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (628.5145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (53.93s)
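The failure above traces to a single root cause visible in the kubelet journal: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1", restart counter climbing from 393 to 396), and this WSL2 kernel (5.15.153.1-microsoft-standard-WSL2) still boots with cgroup v1. Below is a minimal sketch of the two remediations the preflight warnings themselves point at; neither was exercised in this run, and the patch path and file name are assumptions following kubeadm's patch-file convention.

	# Sketch only -- not applied in this run.
	#
	# Option 1: opt kubelet back into cgroup v1, as the SystemVerification
	# warning suggests ("set the kubelet configuration option 'FailCgroupV1'
	# to 'false'"), using the same kubeadm patch mechanism the log already
	# shows for the "kubeletconfiguration" target.
	mkdir -p /var/tmp/minikube/patches
	cat <<-'EOF' > /var/tmp/minikube/patches/kubeletconfiguration.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

	# Option 2: move the WSL2 host to cgroup v2 so the check passes outright.
	# On the Windows side, add the lines below to %UserProfile%\.wslconfig,
	# then run `wsl --shutdown` and restart Docker Desktop:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all

Either way, the kubelet health probe at http://127.0.0.1:10248/healthz should start answering, which is exactly the check kubeadm gave up on after 4m0s.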

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-247800 apply -f testdata\invalidsvc.yaml
E1205 06:57:23.917803    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-247800 apply -f testdata\invalidsvc.yaml: exit status 1 (20.2096824s)

** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:55398/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-247800 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.21s)
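Worth noting before the next section: the apply did not fail on the intentionally invalid manifest. kubectl could not even download the OpenAPI schema, because the forwarded apiserver endpoint (127.0.0.1:55398, the 8441/tcp mapping shown in the docker inspect output later in this report) returned EOF. A quick probe separates "apiserver down" from a genuine validation result; a sketch assuming the same kubectl context:

	# Sketch: if this probe fails, the apply failure above is infrastructure
	# (apiserver unreachable), not the invalid service under test.
	kubectl --context functional-247800 get --raw /healthz

The --validate=false hint in the stderr would only skip the schema download; it would not fix the refused connection.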

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (4.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 status: exit status 2 (591.828ms)

-- stdout --
	functional-247800
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-247800 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (623.8107ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-247800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 status -o json: exit status 2 (641.4418ms)

-- stdout --
	{"Name":"functional-247800","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-247800 status -o json" : exit status 2
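All three invocations exit 2 while still printing well-formed output: minikube signals degraded component state through a non-zero exit code (the harness's own "may be ok" note acknowledges this), so host=Running with kubelet and apiserver Stopped is self-consistent with the kubelet crash loop documented earlier. For scripted checks it is safer to gate on the reported fields than on the exit status alone; a minimal sketch, assuming a POSIX shell is available on the host:

	# Sketch: capture the JSON and the exit code separately; exit 2 with
	# Host=Running matches the "cluster up, Kubernetes down" state above.
	out/minikube-windows-amd64.exe -p functional-247800 status -o json > status.json
	rc=$?
	echo "minikube status exit=$rc"
	cat status.json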
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (645.7503ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.0315903s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                                 ARGS                                                                                                  │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache     │ functional-247800 cache reload                                                                                                                                                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh       │ functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                                                               │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache     │ delete registry.k8s.io/pause:3.1                                                                                                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cache     │ delete registry.k8s.io/pause:latest                                                                                                                                                                   │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ kubectl   │ functional-247800 kubectl -- --context functional-247800 get pods                                                                                                                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ start     │ -p functional-247800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                                                              │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ config    │ functional-247800 config unset cpus                                                                                                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ cp        │ functional-247800 cp testdata\cp-test.txt /home/docker/cp-test.txt                                                                                                                                    │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ service   │ functional-247800 service list                                                                                                                                                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ config    │ functional-247800 config get cpus                                                                                                                                                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ config    │ functional-247800 config get cpus                                                                                                                                                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ ssh       │ functional-247800 ssh -n functional-247800 sudo cat /home/docker/cp-test.txt                                                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ config    │ functional-247800 config unset cpus                                                                                                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ config    │ functional-247800 config get cpus                                                                                                                                                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ service   │ functional-247800 service --namespace=default --https --url hello-node                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ cp        │ functional-247800 cp functional-247800:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3149493486\001\cp-test.txt │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ service   │ functional-247800 service hello-node --url --format={{.IP}}                                                                                                                                           │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ ssh       │ functional-247800 ssh -n functional-247800 sudo cat /home/docker/cp-test.txt                                                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ service   │ functional-247800 service hello-node --url                                                                                                                                                            │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ cp        │ functional-247800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ ssh       │ functional-247800 ssh -n functional-247800 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ start     │ -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ start     │ -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ start     │ -p functional-247800 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-247800 --alsologtostderr -v=1                                                                                                                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:57:47
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:57:47.919380   11688 out.go:360] Setting OutFile to fd 1156 ...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.966846   11688 out.go:374] Setting ErrFile to fd 1092...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.983235   11688 out.go:368] Setting JSON to false
	I1205 06:57:47.988709   11688 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9125,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:57:47.988802   11688 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:57:47.993271   11688 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:57:47.996919   11688 notify.go:221] Checking for updates...
	I1205 06:57:47.999505   11688 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:57:48.001142   11688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:48.004184   11688 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:57:48.006193   11688 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:48.008186   11688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:48.011391   11688 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:57:48.012582   11688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:48.134800   11688 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:57:48.139641   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.393757   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.364604004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.398140   11688 out.go:179] * Using the docker driver based on existing profile
	I1205 06:57:48.401139   11688 start.go:309] selected driver: docker
	I1205 06:57:48.401139   11688 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.401139   11688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:48.408135   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.639564   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.617542351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.676326   11688 cni.go:84] Creating CNI manager for ""
	I1205 06:57:48.676326   11688 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:57:48.676326   11688 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.680324   11688 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406062974Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406068774Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406091077Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406121880Z" level=info msg="Initializing buildkit"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.521727722Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529404028Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529609450Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529612750Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529693058Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:44:10 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:10 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:44:11 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:44:11 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:57:49.709622   43418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:49.711037   43418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:49.712302   43418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:49.714776   43418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:57:49.717003   43418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:44] CPU: 6 PID: 67767 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000825] RIP: 0033:0x7f9683d26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7f9683d26af6.
	[  +0.000653] RSP: 002b:00007ffedb1b9ba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000774] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000786] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000895] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000804] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000818] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000794] FS:  0000000000000000 GS:  0000000000000000
	[  +0.946792] CPU: 8 PID: 67891 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f0ceb5efb20
	[  +0.000393] Code: Unable to access opcode bytes at RIP 0x7f0ceb5efaf6.
	[  +0.000679] RSP: 002b:00007fff219f5bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000791] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000868] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001135] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001172] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001044] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:57:49 up  2:31,  0 user,  load average: 0.25, 0.28, 0.41
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:57:46 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:46 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 434.
	Dec 05 06:57:46 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:46 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:46 functional-247800 kubelet[43222]: E1205 06:57:46.947232   43222 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:46 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:46 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:47 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 435.
	Dec 05 06:57:47 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:47 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:47 functional-247800 kubelet[43249]: E1205 06:57:47.678086   43249 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:47 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:47 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:48 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 436.
	Dec 05 06:57:48 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:48 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:48 functional-247800 kubelet[43277]: E1205 06:57:48.440389   43277 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:48 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:48 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:57:49 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 437.
	Dec 05 06:57:49 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:49 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:57:49 functional-247800 kubelet[43305]: E1205 06:57:49.184016   43305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:57:49 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:57:49 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (597.9418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (4.22s)
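Diagnostic note: the kubelet journal above shows why the apiserver never comes back. kubelet v1.35.0-beta.0 exits at startup with "kubelet is configured to not run on a host using cgroup v1" and systemd restarts it in a loop (restart counter 434-437), while the dockerd log further up likewise warns that cgroup v1 support is deprecated. A quick way to confirm which cgroup mode the WSL2-backed node is on is to check the filesystem type mounted at /sys/fs/cgroup; this is a minimal diagnostic sketch run against this profile, not part of the recorded test output:

	# Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	out/minikube-windows-amd64.exe -p functional-247800 ssh -- stat -fc %T /sys/fs/cgroup/

If this prints tmpfs, the node is on cgroup v1 and this kubelet build will keep crash-looping; forcing cgroup v2 for WSL2 (commonly by adding kernelCommandLine = cgroup_no_v1=all to %UserProfile%\.wslconfig and running wsl --shutdown, an assumption about this host's setup rather than something verified here) or pinning an older Kubernetes version are the usual ways out.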

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (122.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-247800 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-247800 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (97.8012ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:55398/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-247800 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-247800 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-247800 describe po hello-node-connect: exit status 1 (50.3220803s)

                                                
                                                
** stderr ** 
	E1205 06:58:16.984637    4272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:27.068393    4272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:37.103611    4272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:47.139644    4272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:57.179973    4272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-247800 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-247800 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-247800 logs -l app=hello-node-connect: exit status 1 (40.2924111s)

                                                
                                                
** stderr ** 
	E1205 06:59:07.315872    9972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:59:17.399595    9972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:59:27.436852    9972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:59:37.477119    9972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-247800 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-247800 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-247800 describe svc hello-node-connect: exit status 1 (29.3794919s)

                                                
                                                
** stderr ** 
	E1205 06:59:47.615642   12844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:59:57.712733   12844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-247800 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
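Diagnostic note: every kubectl call in this block fails the same way. The client reaches 127.0.0.1:55398 (the host mapping for the apiserver's 8441/tcp, visible in the docker inspect below), but the connection is closed immediately with EOF, so the port forward is up while the apiserver behind it is not. A short-timeout health probe would surface this in seconds instead of the ~30-50s of retries seen per command; a minimal sketch, assuming the functional-247800 context is the one in use:

	# Fail fast if the apiserver is down instead of retrying for minutes
	kubectl --context functional-247800 get --raw /readyz --request-timeout=5s
	# Cross-check with minikube's own view of the control plane
	out/minikube-windows-amd64.exe status -p functional-247800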
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (586.5739ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.0072743s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-247800 addons list                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ addons         │ functional-247800 addons list -o json                                                                  │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh echo hello                                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh cat /etc/hostname                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/8036.pem                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /usr/share/ca-certificates/8036.pem                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/80362.pem                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /usr/share/ca-certificates/80362.pem                                    │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ docker-env     │ functional-247800 docker-env                                                                           │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ tunnel         │ functional-247800 tunnel --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel         │ functional-247800 tunnel --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel         │ functional-247800 tunnel --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ ssh            │ functional-247800 ssh sudo cat /etc/test/nested/copy/8036/hosts                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format short --alsologtostderr                                            │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format yaml --alsologtostderr                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ ssh            │ functional-247800 ssh pgrep buildkitd                                                                  │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │                     │
	│ image          │ functional-247800 image build -t localhost/my-image:functional-247800 testdata\build --alsologtostderr │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format json --alsologtostderr                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format table --alsologtostderr                                            │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ update-context │ functional-247800 update-context --alsologtostderr -v=2                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ update-context │ functional-247800 update-context --alsologtostderr -v=2                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ update-context │ functional-247800 update-context --alsologtostderr -v=2                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:57:47
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:57:47.919380   11688 out.go:360] Setting OutFile to fd 1156 ...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.966846   11688 out.go:374] Setting ErrFile to fd 1092...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.983235   11688 out.go:368] Setting JSON to false
	I1205 06:57:47.988709   11688 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9125,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:57:47.988802   11688 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:57:47.993271   11688 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:57:47.996919   11688 notify.go:221] Checking for updates...
	I1205 06:57:47.999505   11688 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:57:48.001142   11688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:48.004184   11688 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:57:48.006193   11688 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:48.008186   11688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:48.011391   11688 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:57:48.012582   11688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:48.134800   11688 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:57:48.139641   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.393757   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.364604004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.398140   11688 out.go:179] * Using the docker driver based on existing profile
	I1205 06:57:48.401139   11688 start.go:309] selected driver: docker
	I1205 06:57:48.401139   11688 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.401139   11688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:48.408135   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.639564   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.617542351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.676326   11688 cni.go:84] Creating CNI manager for ""
	I1205 06:57:48.676326   11688 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:57:48.676326   11688 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.680324   11688 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406068774Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406091077Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406121880Z" level=info msg="Initializing buildkit"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.521727722Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529404028Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529609450Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529612750Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529693058Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:44:10 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:10 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:44:11 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:44:11 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:59:43 functional-247800 dockerd[22190]: time="2025-12-05T06:59:43.588476660Z" level=info msg="sbJoin: gwep4 ''->'aed492c95293', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:00:08.404676   47180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:08.405982   47180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:08.406860   47180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:08.409099   47180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:08.410214   47180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:44] CPU: 6 PID: 67767 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000825] RIP: 0033:0x7f9683d26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7f9683d26af6.
	[  +0.000653] RSP: 002b:00007ffedb1b9ba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000774] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000786] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000895] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000804] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000818] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000794] FS:  0000000000000000 GS:  0000000000000000
	[  +0.946792] CPU: 8 PID: 67891 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f0ceb5efb20
	[  +0.000393] Code: Unable to access opcode bytes at RIP 0x7f0ceb5efaf6.
	[  +0.000679] RSP: 002b:00007fff219f5bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000791] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000868] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001135] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001172] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001044] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:00:08 up  2:33,  0 user,  load average: 0.58, 0.41, 0.44
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:00:04 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:05 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 619.
	Dec 05 07:00:05 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:05 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:05 functional-247800 kubelet[47025]: E1205 07:00:05.642790   47025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:05 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:05 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:06 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 620.
	Dec 05 07:00:06 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:06 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:06 functional-247800 kubelet[47037]: E1205 07:00:06.401122   47037 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:06 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:06 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:07 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 621.
	Dec 05 07:00:07 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:07 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:07 functional-247800 kubelet[47048]: E1205 07:00:07.154544   47048 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:07 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:07 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:07 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 622.
	Dec 05 07:00:07 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:07 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:07 functional-247800 kubelet[47076]: E1205 07:00:07.905219   47076 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:07 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:07 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (591.4485ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (122.37s)
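Editor's note on the failure above: the kubelet excerpt in the captured logs shows a tight crash loop (restart counter 619 through 622) with the same validation error on every attempt: kubelet v1.35.0-beta.0 refuses to start on a host that still mounts the legacy cgroup v1 hierarchy, and the Docker daemon log in the same capture carries the matching cgroup v1 deprecation warning. A minimal diagnostic sketch, assuming shell access to the node via minikube ssh (profile name taken from this run; the .wslconfig setting is an assumption about the WSL2-backed Docker Desktop host, not something recorded in this report):

    # Print the filesystem type of the cgroup mount inside the node:
    # "cgroup2fs" means cgroup v2; "tmpfs" means the v1 hierarchy that
    # this kubelet build rejects.
    minikube -p functional-247800 ssh -- stat -fc %T /sys/fs/cgroup/

    # Assumed host-side remedy for WSL2: force cgroup v2 in
    # %UserProfile%\.wslconfig, then run `wsl --shutdown` and restart
    # Docker Desktop:
    #   [wsl2]
    #   kernelCommandLine = cgroup_no_v1=all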

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (243.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
E1205 07:00:22.627546    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:55398/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (674.6668ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
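Editor's note: each pod-list attempt above dies with EOF against https://127.0.0.1:55398, the host port Docker forwards to the node's apiserver port 8441 (confirmed by the docker inspect output below), so the 4m0s wait for the storage-provisioner pod could never succeed while the apiserver stayed down. A quick manual probe, sketched with standard curl flags (endpoint taken from the warnings above):

    # A healthy apiserver normally answers this unauthenticated probe
    # with "ok"; an immediate EOF or connection reset reproduces the
    # warnings logged by the test.
    curl -k --max-time 5 https://127.0.0.1:55398/healthz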
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
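Editor's note: the inspect output shows the container itself is fine (State.Status "running", RestartCount 0) and maps apiserver port 8441/tcp to 127.0.0.1:55398, the exact endpoint the test polled, which places the failure inside the node (the kubelet cgroup v1 crash loop) rather than in Docker networking. The same mapping can be read back with a standard docker inspect Go template, shown here as a sketch:

    # Prints the host port bound to the node's apiserver port (55398 in this run).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-247800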
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (612.5249ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.4114611s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-247800 addons list                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ addons         │ functional-247800 addons list -o json                                                                  │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh echo hello                                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh cat /etc/hostname                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/8036.pem                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /usr/share/ca-certificates/8036.pem                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/80362.pem                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /usr/share/ca-certificates/80362.pem                                    │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh            │ functional-247800 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ docker-env     │ functional-247800 docker-env                                                                           │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ tunnel         │ functional-247800 tunnel --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel         │ functional-247800 tunnel --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel         │ functional-247800 tunnel --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ ssh            │ functional-247800 ssh sudo cat /etc/test/nested/copy/8036/hosts                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format short --alsologtostderr                                            │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format yaml --alsologtostderr                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ ssh            │ functional-247800 ssh pgrep buildkitd                                                                  │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │                     │
	│ image          │ functional-247800 image build -t localhost/my-image:functional-247800 testdata\build --alsologtostderr │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format json --alsologtostderr                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ image          │ functional-247800 image ls --format table --alsologtostderr                                            │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ update-context │ functional-247800 update-context --alsologtostderr -v=2                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ update-context │ functional-247800 update-context --alsologtostderr -v=2                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	│ update-context │ functional-247800 update-context --alsologtostderr -v=2                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:57:47
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:57:47.919380   11688 out.go:360] Setting OutFile to fd 1156 ...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.966846   11688 out.go:374] Setting ErrFile to fd 1092...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.983235   11688 out.go:368] Setting JSON to false
	I1205 06:57:47.988709   11688 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9125,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:57:47.988802   11688 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:57:47.993271   11688 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:57:47.996919   11688 notify.go:221] Checking for updates...
	I1205 06:57:47.999505   11688 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:57:48.001142   11688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:48.004184   11688 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:57:48.006193   11688 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:48.008186   11688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:48.011391   11688 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:57:48.012582   11688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:48.134800   11688 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:57:48.139641   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.393757   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.364604004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.398140   11688 out.go:179] * Using the docker driver based on existing profile
	I1205 06:57:48.401139   11688 start.go:309] selected driver: docker
	I1205 06:57:48.401139   11688 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.401139   11688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:48.408135   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.639564   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.617542351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.676326   11688 cni.go:84] Creating CNI manager for ""
	I1205 06:57:48.676326   11688 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:57:48.676326   11688 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.680324   11688 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406068774Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406091077Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406121880Z" level=info msg="Initializing buildkit"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.521727722Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529404028Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529609450Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529612750Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529693058Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:44:10 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:10 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:44:11 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:44:11 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:59:43 functional-247800 dockerd[22190]: time="2025-12-05T06:59:43.588476660Z" level=info msg="sbJoin: gwep4 ''->'aed492c95293', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:01:45.502531   48877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:01:45.503642   48877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:01:45.504967   48877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:01:45.506199   48877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:01:45.507559   48877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:44] CPU: 6 PID: 67767 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000825] RIP: 0033:0x7f9683d26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7f9683d26af6.
	[  +0.000653] RSP: 002b:00007ffedb1b9ba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000774] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000786] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000895] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000804] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000818] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000794] FS:  0000000000000000 GS:  0000000000000000
	[  +0.946792] CPU: 8 PID: 67891 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f0ceb5efb20
	[  +0.000393] Code: Unable to access opcode bytes at RIP 0x7f0ceb5efaf6.
	[  +0.000679] RSP: 002b:00007fff219f5bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000791] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000868] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001135] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001172] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001044] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:01:45 up  2:35,  0 user,  load average: 0.25, 0.37, 0.43
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:01:42 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:01:43 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 749.
	Dec 05 07:01:43 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:43 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:43 functional-247800 kubelet[48709]: E1205 07:01:43.154724   48709 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:01:43 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:01:43 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:01:43 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 750.
	Dec 05 07:01:43 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:43 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:43 functional-247800 kubelet[48737]: E1205 07:01:43.905102   48737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:01:43 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:01:43 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:01:44 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 751.
	Dec 05 07:01:44 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:44 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:44 functional-247800 kubelet[48765]: E1205 07:01:44.647754   48765 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:01:44 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:01:44 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:01:45 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 752.
	Dec 05 07:01:45 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:45 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:01:45 functional-247800 kubelet[48882]: E1205 07:01:45.388357   48882 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:01:45 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:01:45 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
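
The kubelet section of these logs shows the actual failure: kubelet is crash-looping (restart counter at ~750) because it refuses to validate its configuration on a cgroup v1 host, and with no kubelet there is no apiserver on 8441, which is what every connection-refused line under "describe nodes" reflects. The dockerd deprecation warning at the top of the Docker section points the same way: this Docker Desktop/WSL2 backend still runs cgroup v1. A quick check of the cgroup mode from the host (a hypothetical diagnostic, not part of the recorded run):

	docker exec functional-247800 stat -fc %T /sys/fs/cgroup/

This prints "tmpfs" on a cgroup v1 host and "cgroup2fs" on cgroup v2.
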
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (592.0608ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (243.37s)
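
Given the split state above (Host "Running", APIServer "Stopped"), the per-component state can be captured in one call instead of the two separate --format invocations the harness makes; a sketch using minikube's Go-template status fields:

	out/minikube-windows-amd64.exe status -p functional-247800 --format "host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}"

The field names follow the templates already used in this run ({{.Host}}, {{.APIServer}}); Kubelet and Kubeconfig are the other documented status fields.
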
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-247800 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-247800 replace --force -f testdata\mysql.yaml: exit status 1 (20.2170485s)
** stderr ** 
	E1205 06:59:24.314412    3172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:59:34.398125    3172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:55398/api?timeout=32s": EOF
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:55398/api?timeout=32s": EOF
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-247800 replace --force -f testdata\\mysql.yaml" failed: exit status 1
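
The EOF errors against https://127.0.0.1:55398 line up with the docker inspect output below: 55398 is the host port Docker mapped to the container's 8441/tcp, the apiserver port. With the apiserver down, Docker Desktop's port proxy still accepts the TCP connection and then closes it, so kubectl sees EOF on the host where clients inside the container see connection refused. A direct probe of the apiserver health endpoint (hypothetical, not part of the recorded run):

	curl -k https://127.0.0.1:55398/livez
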
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:
-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (572.8565ms)
-- stdout --
	Running

helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image      │ functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ image      │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ image      │ functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:58 UTC │
	│ image      │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image      │ functional-247800 image save kicbase/echo-server:functional-247800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image      │ functional-247800 image rm kicbase/echo-server:functional-247800 --alsologtostderr                                                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image      │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image      │ functional-247800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image      │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image      │ functional-247800 image save --daemon kicbase/echo-server:functional-247800 --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ addons     │ functional-247800 addons list                                                                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ addons     │ functional-247800 addons list -o json                                                                                                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh echo hello                                                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh cat /etc/hostname                                                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh sudo cat /etc/ssl/certs/8036.pem                                                                                                    │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh sudo cat /usr/share/ca-certificates/8036.pem                                                                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh sudo cat /etc/ssl/certs/80362.pem                                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh sudo cat /usr/share/ca-certificates/80362.pem                                                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh        │ functional-247800 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ docker-env │ functional-247800 docker-env                                                                                                                              │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ tunnel     │ functional-247800 tunnel --alsologtostderr                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel     │ functional-247800 tunnel --alsologtostderr                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel     │ functional-247800 tunnel --alsologtostderr                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ ssh        │ functional-247800 ssh sudo cat /etc/test/nested/copy/8036/hosts                                                                                           │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:57:47
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:57:47.919380   11688 out.go:360] Setting OutFile to fd 1156 ...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.966846   11688 out.go:374] Setting ErrFile to fd 1092...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.983235   11688 out.go:368] Setting JSON to false
	I1205 06:57:47.988709   11688 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9125,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:57:47.988802   11688 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:57:47.993271   11688 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:57:47.996919   11688 notify.go:221] Checking for updates...
	I1205 06:57:47.999505   11688 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:57:48.001142   11688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:48.004184   11688 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:57:48.006193   11688 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:48.008186   11688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:48.011391   11688 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:57:48.012582   11688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:48.134800   11688 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:57:48.139641   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.393757   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.364604004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.398140   11688 out.go:179] * Using the docker driver based on existing profile
	I1205 06:57:48.401139   11688 start.go:309] selected driver: docker
	I1205 06:57:48.401139   11688 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.401139   11688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:48.408135   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.639564   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.617542351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.676326   11688 cni.go:84] Creating CNI manager for ""
	I1205 06:57:48.676326   11688 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:57:48.676326   11688 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.680324   11688 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406062974Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406068774Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406091077Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406121880Z" level=info msg="Initializing buildkit"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.521727722Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529404028Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529609450Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529612750Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529693058Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:44:10 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:10 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:44:11 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:44:11 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:59:35.926719   46033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:59:35.927846   46033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:59:35.931241   46033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:59:35.931924   46033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:59:35.934384   46033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:44] CPU: 6 PID: 67767 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000825] RIP: 0033:0x7f9683d26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7f9683d26af6.
	[  +0.000653] RSP: 002b:00007ffedb1b9ba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000774] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000786] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000895] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000804] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000818] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000794] FS:  0000000000000000 GS:  0000000000000000
	[  +0.946792] CPU: 8 PID: 67891 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f0ceb5efb20
	[  +0.000393] Code: Unable to access opcode bytes at RIP 0x7f0ceb5efaf6.
	[  +0.000679] RSP: 002b:00007fff219f5bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000791] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000868] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001135] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001172] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001044] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:59:35 up  2:33,  0 user,  load average: 0.44, 0.36, 0.43
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:59:32 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:59:33 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 576.
	Dec 05 06:59:33 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:33 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:33 functional-247800 kubelet[45878]: E1205 06:59:33.401083   45878 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:59:33 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:59:33 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:59:34 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 577.
	Dec 05 06:59:34 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:34 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:34 functional-247800 kubelet[45889]: E1205 06:59:34.141700   45889 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:59:34 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:59:34 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:59:34 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 578.
	Dec 05 06:59:34 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:34 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:34 functional-247800 kubelet[45902]: E1205 06:59:34.911468   45902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:59:34 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:59:34 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:59:35 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 579.
	Dec 05 06:59:35 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:35 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:59:35 functional-247800 kubelet[45945]: E1205 06:59:35.664343   45945 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:59:35 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:59:35 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (582.7709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.45s)
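The kubelet section above shows the likely root cause for this and the other v1.35.0-beta.0 failures in this run: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host (here a WSL2 5.15 kernel), so it crash-loops (restart counter 576-579), the apiserver on port 8441 never comes up, and every kubectl-dependent assertion fails. A minimal way to confirm the cgroup version on the node (a sketch, reusing the functional-247800 profile from this run; stat prints cgroup2fs for cgroup v2 and tmpfs for the cgroup v1 hierarchy the kubelet rejects):

	minikube -p functional-247800 ssh stat -fc %T /sys/fs/cgroup/
	minikube -p functional-247800 ssh sudo journalctl -u kubelet --no-pager -n 20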

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-247800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-247800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (50.347599s)

                                                
                                                
** stderr ** 
	E1205 06:58:01.776892    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:11.866761    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:21.909292    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:31.947581    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:41.982848    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-247800 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1205 06:58:01.776892    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:11.866761    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:21.909292    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:31.947581    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:41.982848    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1205 06:58:01.776892    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:11.866761    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:21.909292    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:31.947581    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:41.982848    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1205 06:58:01.776892    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:11.866761    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:21.909292    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:31.947581    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:41.982848    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1205 06:58:01.776892    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:11.866761    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:21.909292    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:31.947581    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:41.982848    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1205 06:58:01.776892    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:11.866761    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:21.909292    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:31.947581    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	E1205 06:58:41.982848    3364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:55398/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
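The go-template in the failing command iterates over the first node's .metadata.labels map and prints each key; on a healthy cluster the output would include the minikube.k8s.io/commit, minikube.k8s.io/version, minikube.k8s.io/updated_at, minikube.k8s.io/name, and minikube.k8s.io/primary labels the test asserts on. The repeated "EOF" errors above mean the TCP connection to 127.0.0.1:55398 opens but nothing answers behind it, consistent with the kubelet crash loop in the logs below. For reference, the same query as it would be run against a working cluster (a sketch, same context name as this run):

	kubectl --context functional-247800 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'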
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-247800
helpers_test.go:243: (dbg) docker inspect functional-247800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc",
	        "Created": "2025-12-05T06:26:07.179836347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:26:07.445996819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/hosts",
	        "LogPath": "/var/lib/docker/containers/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc/b5c787fb2368f3a45222cbed271b35a29138e708420fa4536ea9dd3dc2d3f8dc-json.log",
	        "Name": "/functional-247800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-247800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-247800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d24d17856560f9ce7f5ed7cd985d6aa6ef78ae0e585cf02c45b2349370a7160/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-247800",
	                "Source": "/var/lib/docker/volumes/functional-247800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-247800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-247800",
	                "name.minikube.sigs.k8s.io": "functional-247800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86a6c6979a3d01d57b7a97e50c2f466331605a0803bc0b565360ecac302c58e0",
	            "SandboxKey": "/var/run/docker/netns/86a6c6979a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55394"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55397"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-247800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8951bfa50cf5aa11aa525e417cc57196fc3dfe87f30feb8c2886ba0dce94c862",
	                    "EndpointID": "7fa37e644dafe936e173981b5080162bfb15bb4d39b3a03b0df937e6b994755b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-247800",
	                        "b5c787fb2368"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
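The NetworkSettings.Ports block above also explains the address kubectl was dialing: the apiserver port 8441/tcp inside the container is published on the host as 127.0.0.1:55398, so "Unable to connect to the server" against :55398 means the container's 8441 is not answering, not that the port mapping is missing. The mapping can be recovered directly (a sketch using docker inspect's Go-template -f flag; quoting shown for a POSIX shell):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-247800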
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-247800 -n functional-247800: exit status 2 (612.0366ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs -n 25: (1.0192983s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-247800 service hello-node --url                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ cp        │ functional-247800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ ssh       │ functional-247800 ssh -n functional-247800 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ start     │ -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ start     │ -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ start     │ -p functional-247800 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                 │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-247800 --alsologtostderr -v=1                                                                                            │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ ssh       │ functional-247800 ssh sudo systemctl is-active crio                                                                                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ image     │ functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ image     │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ image     │ functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ image     │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ image     │ functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:58 UTC │
	│ image     │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image     │ functional-247800 image save kicbase/echo-server:functional-247800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image     │ functional-247800 image rm kicbase/echo-server:functional-247800 --alsologtostderr                                                                        │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image     │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image     │ functional-247800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image     │ functional-247800 image ls                                                                                                                                │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ image     │ functional-247800 image save --daemon kicbase/echo-server:functional-247800 --alsologtostderr                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ addons    │ functional-247800 addons list                                                                                                                             │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ addons    │ functional-247800 addons list -o json                                                                                                                     │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh       │ functional-247800 ssh echo hello                                                                                                                          │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh       │ functional-247800 ssh cat /etc/hostname                                                                                                                   │ functional-247800 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:57:47
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:57:47.919380   11688 out.go:360] Setting OutFile to fd 1156 ...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.966846   11688 out.go:374] Setting ErrFile to fd 1092...
	I1205 06:57:47.966846   11688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.983235   11688 out.go:368] Setting JSON to false
	I1205 06:57:47.988709   11688 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9125,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:57:47.988802   11688 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:57:47.993271   11688 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:57:47.996919   11688 notify.go:221] Checking for updates...
	I1205 06:57:47.999505   11688 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:57:48.001142   11688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:48.004184   11688 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:57:48.006193   11688 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:48.008186   11688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:48.011391   11688 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:57:48.012582   11688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:48.134800   11688 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:57:48.139641   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.393757   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.364604004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.398140   11688 out.go:179] * Using the docker driver based on existing profile
	I1205 06:57:48.401139   11688 start.go:309] selected driver: docker
	I1205 06:57:48.401139   11688 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.401139   11688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:48.408135   11688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:48.639564   11688 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:48.617542351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:48.676326   11688 cni.go:84] Creating CNI manager for ""
	I1205 06:57:48.676326   11688 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:57:48.676326   11688 start.go:353] cluster config:
	{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:48.680324   11688 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406062974Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406068774Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406091077Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.406121880Z" level=info msg="Initializing buildkit"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.521727722Z" level=info msg="Completed buildkit initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529404028Z" level=info msg="Daemon has completed initialization"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529609450Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529612750Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 06:44:10 functional-247800 dockerd[22190]: time="2025-12-05T06:44:10.529693058Z" level=info msg="API listen on [::]:2376"
	Dec 05 06:44:10 functional-247800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:10 functional-247800 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 05 06:44:10 functional-247800 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 05 06:44:11 functional-247800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Loaded network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 06:44:11 functional-247800 cri-dockerd[22509]: time="2025-12-05T06:44:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 06:44:11 functional-247800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:58:43.566320   44757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:43.567599   44757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:43.568791   44757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:43.570105   44757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:43.571470   44757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000763] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000916] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001056] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001235] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000934] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 06:44] CPU: 6 PID: 67767 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000825] RIP: 0033:0x7f9683d26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7f9683d26af6.
	[  +0.000653] RSP: 002b:00007ffedb1b9ba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000774] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000786] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000895] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000804] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000818] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000794] FS:  0000000000000000 GS:  0000000000000000
	[  +0.946792] CPU: 8 PID: 67891 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f0ceb5efb20
	[  +0.000393] Code: Unable to access opcode bytes at RIP 0x7f0ceb5efaf6.
	[  +0.000679] RSP: 002b:00007fff219f5bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000791] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000868] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001135] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001172] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001044] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:58:43 up  2:32,  0 user,  load average: 0.49, 0.34, 0.42
	Linux functional-247800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:58:40 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:40 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 506.
	Dec 05 06:58:40 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:40 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:40 functional-247800 kubelet[44596]: E1205 06:58:40.907179   44596 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:40 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:40 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:41 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 507.
	Dec 05 06:58:41 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:41 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:41 functional-247800 kubelet[44608]: E1205 06:58:41.645834   44608 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:41 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:41 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:42 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 508.
	Dec 05 06:58:42 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:42 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:42 functional-247800 kubelet[44620]: E1205 06:58:42.417632   44620 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:42 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:42 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:43 functional-247800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 509.
	Dec 05 06:58:43 functional-247800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:43 functional-247800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:43 functional-247800 kubelet[44649]: E1205 06:58:43.175294   44649 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:43 functional-247800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:43 functional-247800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-247800 -n functional-247800: exit status 2 (595.5892ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-247800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.65s)
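
The kubelet journal above shows why this and the surrounding parallel tests fail: the WSL2 host still exposes cgroup v1, the v1.35.0-beta.0 kubelet refuses to start on it at every scheduled restart, so the apiserver never comes up and dependent tests see "connection refused" or state=Stopped. A minimal Go sketch of a host-side probe for the same condition (a hypothetical helper, not part of the test suite; it uses the presence of cgroup.controllers as a v2 signal rather than the kubelet's own statfs-based check, and assumes the standard /sys/fs/cgroup layout):

	package main

	import (
		"fmt"
		"os"
	)

	// cgroup.controllers exists only at the root of the unified (v2)
	// hierarchy, so its presence is a simple v1-vs-v2 probe.
	func cgroupVersion() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "v2"
		}
		return "v1"
	}

	func main() {
		fmt.Println("host cgroup hierarchy:", cgroupVersion())
	}

On this runner it would print "v1", matching the validation error the kubelet logs on each of its 500+ restarts.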

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-247800 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-247800 create deployment hello-node --image kicbase/echo-server: exit status 1 (108.4876ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:55398/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-247800 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 service list: exit status 103 (498.1457ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-247800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-247800"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-247800 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-247800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-247800\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 service list -o json: exit status 103 (519.2149ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-247800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-247800"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-247800 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 service --namespace=default --https --url hello-node: exit status 103 (526.6991ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-247800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-247800"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-247800 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 service hello-node --url --format={{.IP}}: exit status 103 (521.4489ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-247800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-247800"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-247800 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-247800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-247800\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 service hello-node --url: exit status 103 (481.8188ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-247800 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-247800"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-247800 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-247800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-247800"
functional_test.go:1579: failed to parse "* The control-plane node functional-247800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-247800\"": parse "* The control-plane node functional-247800 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-247800\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.48s)
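
The parse failure at functional_test.go:1579 is mechanical: the test captured minikube's two-line advisory as the service "endpoint", and Go's net/url rejects any ASCII control character, including the embedded newline. A short sketch reproducing both sides (the healthy URL reuses the apiserver host port 55398 seen earlier in this report; the advisory text is abbreviated):

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// What the test expects to capture: a plain service URL.
		if u, err := url.Parse("http://127.0.0.1:55398"); err == nil {
			fmt.Println("ok:", u.Host)
		}

		// What it actually captured: a multi-line advisory. The embedded
		// newline is a control character, so url.Parse fails with
		// "net/url: invalid control character in URL", as logged above.
		_, err := url.Parse("* The control-plane node apiserver is not running\n  To start a cluster, run: minikube start")
		fmt.Println("err:", err)
	}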

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-247800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-247800"
functional_test.go:514: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-247800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-247800": exit status 1 (2.8946388s)

                                                
                                                
-- stdout --
	functional-247800
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

                                                
                                                
-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.90s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1205 06:58:52.973277   11312 out.go:360] Setting OutFile to fd 1352 ...
I1205 06:58:53.045753   11312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:58:53.045753   11312 out.go:374] Setting ErrFile to fd 1240...
I1205 06:58:53.045753   11312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:58:53.059752   11312 mustload.go:66] Loading cluster: functional-247800
I1205 06:58:53.060752   11312 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:58:53.068750   11312 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
I1205 06:58:53.118751   11312 host.go:66] Checking if "functional-247800" exists ...
I1205 06:58:53.123753   11312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-247800
I1205 06:58:53.180143   11312 api_server.go:166] Checking apiserver status ...
I1205 06:58:53.184707   11312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 06:58:53.188530   11312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
I1205 06:58:53.243932   11312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
W1205 06:58:53.388967   11312 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1205 06:58:53.398780   11312 out.go:179] * The control-plane node functional-247800 apiserver is not running: (state=Stopped)
I1205 06:58:53.402205   11312 out.go:179]   To start a cluster, run: "minikube start -p functional-247800"

                                                
                                                
stdout: * The control-plane node functional-247800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-247800"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 9324: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr] stdout:
* The control-plane node functional-247800 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-247800"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
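
Before opening a tunnel, minikube resolves the apiserver's host-mapped port with the docker inspect template shown in the trace above, then exits 103 when pgrep finds no kube-apiserver process. The port lookup itself can be reproduced with a short Go sketch (assumes the Docker CLI is on PATH and the functional-247800 container still exists; the template string is copied verbatim from the trace):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort asks the Docker CLI which host port is published for the
	// given container port, using the same Go template as the trace:
	// {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		fmt.Println(hostPort("functional-247800", "8441"))
	}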

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-247800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-247800 apply -f testdata\testsvc.yaml: exit status 1 (20.1794581s)

                                                
                                                
** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:55398/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-247800 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)

                                                
                                    
TestKubernetesUpgrade (850.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-863300 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-863300 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (47.7082732s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-863300
E1205 07:42:23.959263    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-863300: (12.1397774s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-863300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-863300 status --format={{.Host}}: exit status 7 (265.4525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-863300 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-863300 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (12m51.321058s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-863300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-863300" primary control-plane node in "kubernetes-upgrade-863300" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:42:33.219201   13768 out.go:360] Setting OutFile to fd 1304 ...
	I1205 07:42:33.285193   13768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:42:33.285193   13768 out.go:374] Setting ErrFile to fd 1312...
	I1205 07:42:33.285193   13768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:42:33.301200   13768 out.go:368] Setting JSON to false
	I1205 07:42:33.305195   13768 start.go:133] hostinfo: {"hostname":"minikube4","uptime":11811,"bootTime":1764908742,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:42:33.305195   13768 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:42:33.309200   13768 out.go:179] * [kubernetes-upgrade-863300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:42:33.312196   13768 notify.go:221] Checking for updates...
	I1205 07:42:33.314198   13768 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:42:33.319205   13768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:42:33.322192   13768 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:42:33.327197   13768 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:42:33.332208   13768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:42:33.335199   13768 config.go:182] Loaded profile config "kubernetes-upgrade-863300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1205 07:42:33.337200   13768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:42:33.469209   13768 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:42:33.473215   13768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:42:33.766808   13768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-05 07:42:33.743785564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:42:34.128496   13768 out.go:179] * Using the docker driver based on existing profile
	I1205 07:42:34.147903   13768 start.go:309] selected driver: docker
	I1205 07:42:34.147903   13768 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-863300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-863300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:42:34.147903   13768 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:42:34.198572   13768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:42:34.437669   13768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-05 07:42:34.415944352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:42:34.437669   13768 cni.go:84] Creating CNI manager for ""
	I1205 07:42:34.437669   13768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
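The two cni.go lines above record minikube's CNI auto-selection: with the docker driver and the docker container runtime on Kubernetes v1.24 or newer (where dockershim is gone and cri-dockerd needs a CNI config), it recommends the built-in bridge CNI. A minimal Go sketch of that decision; the function and struct names here are invented for illustration, not minikube's actual API:

    // chooseCNI sketches the auto-selection logged above. ClusterSpec and
    // chooseCNI are hypothetical names, not minikube's API.
    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    type ClusterSpec struct {
        Driver            string // e.g. "docker"
        ContainerRuntime  string // e.g. "docker"
        KubernetesVersion string // e.g. "v1.35.0-beta.0"
        RequestedCNI      string // empty means "auto"
    }

    func chooseCNI(c ClusterSpec) string {
        if c.RequestedCNI != "" {
            return c.RequestedCNI // an explicit user choice always wins
        }
        // docker driver + docker runtime on k8s >= v1.24: recommend bridge.
        if c.Driver == "docker" && c.ContainerRuntime == "docker" &&
            semver.Compare(semver.MajorMinor(c.KubernetesVersion), "v1.24") >= 0 {
            return "bridge"
        }
        return "auto"
    }

    func main() {
        fmt.Println(chooseCNI(ClusterSpec{
            Driver: "docker", ContainerRuntime: "docker",
            KubernetesVersion: "v1.35.0-beta.0",
        })) // bridge
    }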
	I1205 07:42:34.437669   13768 start.go:353] cluster config:
	{Name:kubernetes-upgrade-863300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-863300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:42:34.472124   13768 out.go:179] * Starting "kubernetes-upgrade-863300" primary control-plane node in "kubernetes-upgrade-863300" cluster
	I1205 07:42:34.476133   13768 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:42:34.478596   13768 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:42:34.483214   13768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:42:34.483214   13768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 07:42:34.526584   13768 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 07:42:34.581585   13768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:42:34.581585   13768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
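The image.go lines above probe the local Docker daemon for the kicbase image before deciding whether to pull ("Found ... in local docker daemon, skipping pull"). A sketch of the same existence check by shelling out to the docker CLI; minikube itself goes through a client library rather than os/exec:

    // imageInDaemon reports whether ref exists in the local Docker daemon,
    // mirroring the "exists in daemon, skipping load" check above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func imageInDaemon(ref string) bool {
        // `docker image inspect` exits non-zero when the image is absent.
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974"
        if imageInDaemon(ref) {
            fmt.Println("exists in daemon, skipping pull")
        } else {
            fmt.Println("pulling", ref)
        }
    }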
	W1205 07:42:34.748855   13768 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 07:42:34.748855   13768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\config.json ...
	I1205 07:42:34.748855   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:42:34.748855   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:42:34.748855   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:42:34.748855   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:42:34.748855   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:42:34.748855   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:42:34.748855   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:42:34.749861   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
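Each localpath.go:148 line above rewrites an image cache path because NTFS forbids ':' in file names, so the tag separator in e.g. kube-proxy:v1.35.0-beta.0 becomes '_'. Only the final path element is rewritten, which is why the drive-letter colon survives. A sketch of that rule (helper name hypothetical):

    // sanitizeWindowsImagePath sketches the tag-colon rewrite logged above:
    // only the last path element is sanitized, so "C:" in the drive prefix
    // is left alone. Hypothetical helper, not minikube's exact function.
    package main

    import (
        "fmt"
        "strings"
    )

    func sanitizeWindowsImagePath(p string) string {
        i := strings.LastIndexAny(p, `\/`) // split off the final element
        return p[:i+1] + strings.ReplaceAll(p[i+1:], ":", "_")
    }

    func main() {
        in := `C:\Users\jenkins\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0`
        fmt.Println(sanitizeWindowsImagePath(in))
        // ...\registry.k8s.io\kube-proxy_v1.35.0-beta.0
    }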
	I1205 07:42:34.751858   13768 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:42:34.751858   13768 start.go:360] acquireMachinesLock for kubernetes-upgrade-863300: {Name:mk448de1e7d89cb2b2be765e40b6082a6afd56f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:34.752338   13768 start.go:364] duration metric: took 376.8µs to acquireMachinesLock for "kubernetes-upgrade-863300"
	I1205 07:42:34.752368   13768 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:42:34.752368   13768 fix.go:54] fixHost starting: 
	I1205 07:42:34.765863   13768 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-863300 --format={{.State.Status}}
	I1205 07:42:35.142326   13768 fix.go:112] recreateIfNeeded on kubernetes-upgrade-863300: state=Stopped err=<nil>
	W1205 07:42:35.142326   13768 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:42:35.146326   13768 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-863300" ...
	I1205 07:42:35.151899   13768 cli_runner.go:164] Run: docker start kubernetes-upgrade-863300
	I1205 07:42:37.487226   13768 cli_runner.go:217] Completed: docker start kubernetes-upgrade-863300: (2.3352905s)
	I1205 07:42:37.496233   13768 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-863300 --format={{.State.Status}}
	I1205 07:42:37.631618   13768 kic.go:430] container "kubernetes-upgrade-863300" state is running.
	I1205 07:42:37.638512   13768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-863300
	I1205 07:42:37.771026   13768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\config.json ...
	I1205 07:42:37.777238   13768 machine.go:94] provisionDockerMachine start ...
	I1205 07:42:37.786898   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:38.049421   13768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:42:38.050305   13768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60021 <nil> <nil>}
	I1205 07:42:38.050305   13768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:42:38.066351   13768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
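The handshake EOF above is the first SSH dial hitting a container whose sshd is not up yet; the same hostname command succeeds at 07:42:41 further down, implying a retry loop. A sketch of dial-with-retry using golang.org/x/crypto/ssh, assuming simple fixed-interval retries (key auth is elided and all addresses, intervals, and deadlines are placeholders):

    // dialWithRetry sketches the retry implied by the log gap above: the
    // first handshake hits a not-yet-ready sshd and fails with EOF, so we
    // back off and try again until the deadline.
    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
        var lastErr error
        for start := time.Now(); time.Since(start) < deadline; time.Sleep(time.Second) {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err // e.g. "ssh: handshake failed: EOF" while sshd boots
        }
        return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            // Key-based Auth is elided here; test-only host key handling:
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         5 * time.Second,
        }
        if _, err := dialWithRetry("127.0.0.1:60021", cfg, 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }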
	I1205 07:42:38.366861   13768 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.366861   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:42:38.368866   13768 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.6199542s
	I1205 07:42:38.368866   13768 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:42:38.385069   13768 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.385900   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:42:38.385900   13768 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.6369873s
	I1205 07:42:38.385900   13768 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:42:38.393116   13768 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.394119   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 07:42:38.394119   13768 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.6452062s
	I1205 07:42:38.394119   13768 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 07:42:38.474426   13768 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.475430   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:42:38.475430   13768 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.7265163s
	I1205 07:42:38.475430   13768 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:42:38.540414   13768 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.540414   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 07:42:38.540414   13768 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.7904936s
	I1205 07:42:38.540414   13768 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 07:42:38.558424   13768 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.558424   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 07:42:38.559433   13768 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.8105174s
	I1205 07:42:38.559433   13768 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 07:42:38.600420   13768 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.600420   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 07:42:38.600420   13768 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.8515047s
	I1205 07:42:38.600420   13768 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:42:38.735417   13768 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:42:38.735417   13768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 07:42:38.735417   13768 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.9864993s
	I1205 07:42:38.735417   13768 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 07:42:38.735417   13768 cache.go:87] Successfully saved all images to host disk.
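The cache.go lines above show the image-cache pattern repeated per image: take a per-image lock, check whether the tarball already exists on disk, and count an existing tarball as done with the elapsed time since the batch started. A sketch of that exists-or-save flow, with in-process mutexes standing in for minikube's named file locks (all names hypothetical):

    // cacheImage sketches the per-image lock + exists check in the cache.go
    // lines above. sync.Map keyed by image name stands in for file locks;
    // saveTar is a stub for the actual pull-and-save.
    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    var locks sync.Map // image name -> *sync.Mutex

    func lockFor(image string) *sync.Mutex {
        m, _ := locks.LoadOrStore(image, &sync.Mutex{})
        return m.(*sync.Mutex)
    }

    func cacheImage(image, tarPath string, started time.Time) error {
        mu := lockFor(image)
        mu.Lock() // "acquiring lock" in the log
        defer mu.Unlock()
        if _, err := os.Stat(tarPath); err == nil {
            fmt.Printf("cache image %q -> %q took %s (already exists)\n",
                image, tarPath, time.Since(started))
            return nil // tarball already on disk: skip the download
        }
        return saveTar(image, tarPath) // stub: pull image and write tarball
    }

    func saveTar(image, tarPath string) error { return nil }

    func main() {
        _ = cacheImage("registry.k8s.io/pause:3.10.1", "/tmp/pause_3.10.1", time.Now())
    }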
	I1205 07:42:41.262236   13768 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-863300
	
	I1205 07:42:41.262322   13768 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-863300"
	I1205 07:42:41.266041   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:41.338132   13768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:42:41.338877   13768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60021 <nil> <nil>}
	I1205 07:42:41.338993   13768 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-863300 && echo "kubernetes-upgrade-863300" | sudo tee /etc/hostname
	I1205 07:42:41.546152   13768 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-863300
	
	I1205 07:42:41.552245   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:41.616873   13768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:42:41.616873   13768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60021 <nil> <nil>}
	I1205 07:42:41.616873   13768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-863300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-863300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-863300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:42:41.793661   13768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:42:41.794682   13768 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:42:41.794745   13768 ubuntu.go:190] setting up certificates
	I1205 07:42:41.794745   13768 provision.go:84] configureAuth start
	I1205 07:42:41.798523   13768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-863300
	I1205 07:42:41.860790   13768 provision.go:143] copyHostCerts
	I1205 07:42:41.860790   13768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:42:41.860790   13768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:42:41.860790   13768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:42:41.861782   13768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:42:41.861782   13768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:42:41.861782   13768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:42:41.862785   13768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:42:41.862785   13768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:42:41.862785   13768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:42:41.863788   13768 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-863300 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-863300 localhost minikube]
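provision.go:117 above mints a per-machine server certificate signed by the shared minikube CA, carrying the SANs listed in the log (two IPs plus three DNS names). A compact crypto/x509 sketch of issuing such a certificate; the CA here is generated in memory for the example, whereas minikube loads ca.pem/ca-key.pem from disk, and key sizes and lifetimes are placeholders:

    // issueServerCert sketches the SAN-bearing server cert generation
    // logged above, signed by an in-memory CA.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-863300"}},
            // SANs from the log line: IPs and DNS names go in separate fields.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"kubernetes-upgrade-863300", "localhost", "minikube"},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("server cert: %d DER bytes\n", len(der))
    }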
	I1205 07:42:41.882785   13768 provision.go:177] copyRemoteCerts
	I1205 07:42:41.887794   13768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:42:41.892787   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:41.954435   13768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60021 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-863300\id_rsa Username:docker}
	I1205 07:42:42.106251   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:42:42.147317   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 07:42:42.176322   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:42:42.203322   13768 provision.go:87] duration metric: took 408.5024ms to configureAuth
	I1205 07:42:42.203322   13768 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:42:42.203322   13768 config.go:182] Loaded profile config "kubernetes-upgrade-863300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:42:42.207322   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:42.266443   13768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:42:42.267405   13768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60021 <nil> <nil>}
	I1205 07:42:42.267449   13768 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 07:42:42.455815   13768 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 07:42:42.456632   13768 ubuntu.go:71] root file system type: overlay
	I1205 07:42:42.456804   13768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 07:42:42.460993   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:42.520313   13768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:42:42.521344   13768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60021 <nil> <nil>}
	I1205 07:42:42.521344   13768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 07:42:42.724200   13768 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 07:42:42.728636   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:42.789584   13768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:42:42.790039   13768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60021 <nil> <nil>}
	I1205 07:42:42.790080   13768 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 07:42:42.991755   13768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:42:42.991755   13768 machine.go:97] duration metric: took 5.2144347s to provisionDockerMachine
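The diff command a few lines above makes the unit update idempotent: the rendered file goes to docker.service.new, and only when diff reports a change is it swapped in and docker daemon-reloaded and restarted, so an unchanged host skips the expensive restart. A Go sketch of the same write-compare-swap pattern for any config file (helper name hypothetical):

    // updateIfChanged sketches the diff-then-swap pattern in the SSH command
    // above: compare rendered content against the file on disk, and only
    // replace it and run the reload hook when the content differs.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func updateIfChanged(path string, rendered []byte, reload func() error) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, rendered) {
            return nil // nothing to do: skip the daemon-reload/restart
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(tmp, path); err != nil {
            return err
        }
        return reload()
    }

    func main() {
        err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"), func() error {
            fmt.Println("systemctl daemon-reload && systemctl restart docker")
            return nil
        })
        if err != nil {
            fmt.Println(err)
        }
    }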
	I1205 07:42:42.991755   13768 start.go:293] postStartSetup for "kubernetes-upgrade-863300" (driver="docker")
	I1205 07:42:42.991755   13768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:42:42.996688   13768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:42:43.000299   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:43.049833   13768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60021 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-863300\id_rsa Username:docker}
	I1205 07:42:43.189344   13768 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:42:43.199665   13768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:42:43.199665   13768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
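The two lines above come from parsing /etc/os-release: shell-style KEY=VALUE pairs are mapped onto a struct, keys with no matching field (VERSION_CODENAME) draw a warning, and PRETTY_NAME yields the "Remote host" line. A sketch of that parse into a plain map:

    // parseOSRelease sketches the /etc/os-release parse behind the two log
    // lines above: KEY=VALUE pairs, values optionally double-quoted.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func parseOSRelease(data string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(data))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue // blank lines and comments
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        m := parseOSRelease("PRETTY_NAME=\"Debian GNU/Linux 12 (bookworm)\"\nVERSION_CODENAME=bookworm\n")
        fmt.Println("Remote host:", m["PRETTY_NAME"])
    }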
	I1205 07:42:43.199665   13768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 07:42:43.199665   13768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 07:42:43.201092   13768 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 07:42:43.208789   13768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:42:43.223747   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 07:42:43.251744   13768 start.go:296] duration metric: took 259.9848ms for postStartSetup
	I1205 07:42:43.257162   13768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:42:43.260005   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:43.315286   13768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60021 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-863300\id_rsa Username:docker}
	I1205 07:42:43.462207   13768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:42:43.474440   13768 fix.go:56] duration metric: took 8.721935s for fixHost
	I1205 07:42:43.474440   13768 start.go:83] releasing machines lock for "kubernetes-upgrade-863300", held for 8.721935s
	I1205 07:42:43.479499   13768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-863300
	I1205 07:42:43.538278   13768 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 07:42:43.542757   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:43.544045   13768 ssh_runner.go:195] Run: cat /version.json
	I1205 07:42:43.549305   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:43.601410   13768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60021 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-863300\id_rsa Username:docker}
	I1205 07:42:43.603409   13768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60021 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-863300\id_rsa Username:docker}
	I1205 07:42:43.727688   13768 ssh_runner.go:195] Run: systemctl --version
	W1205 07:42:43.732080   13768 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
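The status-127 failure above is worth flagging: the connectivity probe appears to be built with the Windows host's binary name (curl.exe) but is executed via ssh_runner inside the Linux container, where only curl exists, so bash reports "command not found" and the run then emits the registry warning a few lines below (the same stderr that failed TestErrorSpam/setup). A sketch of the fix, keying the binary name on the target OS rather than on runtime.GOOS:

    // curlBinary sketches the pitfall behind the status-127 error above:
    // the probe command must be named for the OS it runs on (the Linux
    // guest), not the OS that builds the command (the Windows host).
    package main

    import (
        "fmt"
        "runtime"
    )

    func curlBinary(targetOS string) string {
        if targetOS == "windows" {
            return "curl.exe"
        }
        return "curl" // inside the minikube container this is always Linux
    }

    func main() {
        // Wrong: keyed on the host; on a Windows host this yields "curl.exe".
        fmt.Println("host-keyed: ", curlBinary(runtime.GOOS))
        // Right: keyed on where the command will actually execute.
        fmt.Println("guest-keyed:", curlBinary("linux"))
    }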
	I1205 07:42:43.744428   13768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:42:43.755467   13768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:42:43.759470   13768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:42:43.773471   13768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:42:43.773471   13768 start.go:496] detecting cgroup driver to use...
	I1205 07:42:43.773471   13768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:42:43.773471   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:42:43.798474   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:42:43.822251   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 07:42:43.836259   13768 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 07:42:43.836332   13768 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 07:42:43.842400   13768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:42:43.846757   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:42:43.867464   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:42:43.885495   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:42:43.905478   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:42:43.940320   13768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:42:43.963874   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:42:43.991566   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:42:44.025295   13768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:42:44.045534   13768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:42:44.061528   13768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:42:44.083798   13768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:42:44.244235   13768 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:42:44.370081   13768 start.go:496] detecting cgroup driver to use...
	I1205 07:42:44.370138   13768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:42:44.375917   13768 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 07:42:44.406569   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:42:44.432669   13768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:42:44.636790   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:42:44.666631   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:42:44.695585   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:42:44.745943   13768 ssh_runner.go:195] Run: which cri-dockerd
	I1205 07:42:44.763663   13768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 07:42:44.783520   13768 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 07:42:44.811509   13768 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 07:42:45.004442   13768 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 07:42:45.107437   13768 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 07:42:45.107662   13768 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 07:42:45.134551   13768 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 07:42:45.165204   13768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:42:45.329542   13768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 07:42:51.793588   13768 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.4639441s)
	I1205 07:42:51.798586   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:42:51.820589   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 07:42:51.853676   13768 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 07:42:51.893346   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:42:51.915342   13768 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 07:42:52.065350   13768 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 07:42:52.207473   13768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:42:52.367537   13768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 07:42:52.401405   13768 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 07:42:52.422407   13768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:42:52.574515   13768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 07:42:52.717465   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:42:52.770207   13768 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 07:42:52.774207   13768 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
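"Will wait 60s for socket path" above, followed by a stat of /var/run/cri-dockerd.sock, suggests a poll-until-exists loop with a deadline. A sketch under that assumption:

    // waitForSocket sketches the "Will wait 60s for socket path" poll above:
    // stat the socket path until it exists or the deadline passes.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is present; cri-dockerd is up
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }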
	I1205 07:42:52.781212   13768 start.go:564] Will wait 60s for crictl version
	I1205 07:42:52.785206   13768 ssh_runner.go:195] Run: which crictl
	I1205 07:42:52.796216   13768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:42:52.869012   13768 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 07:42:52.872003   13768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:42:52.924001   13768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:42:52.974011   13768 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 07:42:52.979995   13768 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-863300 dig +short host.docker.internal
	I1205 07:42:53.117003   13768 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 07:42:53.122027   13768 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 07:42:53.130007   13768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:42:53.151995   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:42:53.208004   13768 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-863300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-863300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:42:53.209003   13768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:42:53.211999   13768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 07:42:53.241010   13768 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 07:42:53.241010   13768 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1205 07:42:53.241010   13768 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
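LoadCachedImages above drives the rest of this section: each required image is inspected in the runtime, and an image that is absent or at the wrong hash is marked "needs transfer", its stale tag removed, and the cached tarball scp'd into /var/lib/minikube/images. A sketch of the needs-transfer decision, where inspectID stands in for `docker image inspect --format {{.Id}}` and the hash below is a truncated placeholder:

    // needsTransfer sketches the cache_images decision visible below: an
    // image needs transfer when the runtime has no image at the expected ID.
    package main

    import "fmt"

    func needsTransfer(image, wantID string, inspectID func(string) (string, bool)) bool {
        got, ok := inspectID(image)
        return !ok || got != wantID // absent or wrong content: load from cache
    }

    func main() {
        // Stub runtime with nothing loaded yet, as after the v1.28.0 upgrade.
        inspect := func(img string) (string, bool) { return "", false }
        img := "registry.k8s.io/pause:3.10.1"
        if needsTransfer(img, "cd073f4c...", inspect) {
            fmt.Printf("%q needs transfer: remove stale tag, scp tarball, docker load\n", img)
        }
    }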
	I1205 07:42:53.254002   13768 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:42:53.258013   13768 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:42:53.262015   13768 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:42:53.263014   13768 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:42:53.267015   13768 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:42:53.267015   13768 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:42:53.272029   13768 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:42:53.273029   13768 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:42:53.277033   13768 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:42:53.279000   13768 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:42:53.283038   13768 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:42:53.284011   13768 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:42:53.290001   13768 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:42:53.290001   13768 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:42:53.293004   13768 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:42:53.298999   13768 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1205 07:42:53.330002   13768 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:42:53.390727   13768 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:42:53.454013   13768 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:42:53.514703   13768 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:42:53.568708   13768 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:42:53.619708   13768 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:42:53.680672   13768 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:42:53.735669   13768 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1205 07:42:53.823066   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:42:53.841139   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:42:53.843149   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 07:42:53.866153   13768 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1205 07:42:53.866153   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:42:53.866153   13768 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:42:53.871156   13768 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:42:53.880149   13768 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1205 07:42:53.880149   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:42:53.880149   13768 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:42:53.883143   13768 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1205 07:42:53.883143   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:42:53.883143   13768 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:42:53.885160   13768 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:42:53.888147   13768 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1205 07:42:53.897161   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 07:42:53.916144   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:42:53.930141   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:42:53.937157   13768 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:42:53.946154   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:42:53.959151   13768 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:42:53.961151   13768 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:42:53.967169   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:42:53.968152   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:42:53.978142   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:42:54.038732   13768 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1205 07:42:54.038732   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:42:54.038732   13768 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:42:54.045094   13768 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1205 07:42:54.045094   13768 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1205 07:42:54.045094   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:42:54.045094   13768 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:42:54.045094   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:42:54.045094   13768 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:42:54.045094   13768 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:42:54.045094   13768 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:42:54.045094   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1205 07:42:54.045094   13768 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:42:54.045094   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1205 07:42:54.045094   13768 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:42:54.046104   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1205 07:42:54.050921   13768 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:42:54.050921   13768 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:42:54.169876   13768 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1205 07:42:54.169876   13768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:42:54.169876   13768 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:42:54.173891   13768 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:42:54.174889   13768 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:42:54.176896   13768 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:42:54.176896   13768 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:42:54.176896   13768 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:42:54.182876   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:42:54.182876   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:42:54.183881   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:42:54.259300   13768 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:42:54.259300   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1205 07:42:54.446703   13768 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:42:54.455463   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:42:54.490146   13768 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:42:54.490146   13768 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:42:54.490146   13768 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:42:54.490146   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1205 07:42:54.491146   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1205 07:42:54.491146   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1205 07:42:54.564100   13768 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1205 07:42:54.564100   13768 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:42:54.564100   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1205 07:42:55.808326   13768 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:42:55.808326   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1205 07:43:00.410541   13768 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (4.6021419s)
	I1205 07:43:00.410541   13768 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:43:00.410541   13768 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:43:00.410541   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1205 07:43:04.123782   13768 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (3.7131834s)
	I1205 07:43:04.123782   13768 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1205 07:43:04.123782   13768 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:43:04.123782   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1205 07:43:05.525565   13768 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (1.4017605s)
	I1205 07:43:05.525565   13768 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:43:05.525565   13768 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:43:05.525565   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1205 07:43:09.188425   13768 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (3.6628026s)
	I1205 07:43:09.188425   13768 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:43:09.188425   13768 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:43:09.189454   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1205 07:43:10.762870   13768 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.5733913s)
	I1205 07:43:10.762870   13768 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:43:10.762870   13768 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:43:10.762870   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1205 07:43:12.852207   13768 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (2.0893051s)
	I1205 07:43:12.852207   13768 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1205 07:43:12.852842   13768 cache_images.go:125] Successfully loaded all cached images
	I1205 07:43:12.852906   13768 cache_images.go:94] duration metric: took 19.6115577s to LoadCachedImages
	I1205 07:43:12.852958   13768 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 07:43:12.853008   13768 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-863300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-863300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:43:12.856843   13768 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 07:43:12.948070   13768 cni.go:84] Creating CNI manager for ""
	I1205 07:43:12.948070   13768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 07:43:12.948070   13768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:43:12.948070   13768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-863300 NodeName:kubernetes-upgrade-863300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:43:12.948070   13768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-863300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
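The rendered kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged Go sketch that walks the documents and prints each apiVersion/kind, e.g. to spot-check the v1beta4 migration; the file path and the gopkg.in/yaml.v3 dependency are assumptions for illustration:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3's Decoder yields one document per Decode call until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}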
	I1205 07:43:12.956929   13768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:43:12.970801   13768 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:43:12.974791   13768 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:43:12.987781   13768 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1205 07:43:12.987781   13768 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1205 07:43:12.987781   13768 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
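Each binary URL above carries a checksum=file: suffix pointing at the published .sha256. A small Go sketch of the implied download-and-verify step (an illustration, not minikube's downloader; it assumes the dl.k8s.io .sha256 files contain just the hex digest):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchAndVerify downloads url to dest, hashing the stream as it is written,
// then compares the digest against the published <url>.sha256.
func fetchAndVerify(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	got := fmt.Sprintf("%x", h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm"
	if err := fetchAndVerify(url, "kubeadm"); err != nil {
		fmt.Println(err)
	}
}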
	I1205 07:43:12.992796   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:43:12.993788   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:43:12.993788   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:43:13.001790   13768 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:43:13.001790   13768 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:43:13.001790   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1205 07:43:13.001790   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1205 07:43:13.025797   13768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:43:13.108798   13768 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:43:13.108798   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1205 07:43:15.144623   13768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:43:15.191601   13768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I1205 07:43:15.218687   13768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:43:15.239886   13768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1205 07:43:15.267382   13768 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:43:15.275385   13768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
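The one-liner above makes the /etc/hosts update idempotent: strip any existing control-plane.minikube.internal line, append the current mapping, and copy the result back with sudo. The same logic in Go (a sketch only; it needs root to write /etc/hosts):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.85.2\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line except a stale mapping for the control-plane name.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}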
	I1205 07:43:15.296389   13768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:43:15.470981   13768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:43:15.493550   13768 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300 for IP: 192.168.85.2
	I1205 07:43:15.493550   13768 certs.go:195] generating shared ca certs ...
	I1205 07:43:15.493550   13768 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:43:15.494773   13768 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 07:43:15.495189   13768 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 07:43:15.495300   13768 certs.go:257] generating profile certs ...
	I1205 07:43:15.495619   13768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\client.key
	I1205 07:43:15.495619   13768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\apiserver.key.2fd49bbf
	I1205 07:43:15.496338   13768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\proxy-client.key
	I1205 07:43:15.496670   13768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 07:43:15.497337   13768 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 07:43:15.497362   13768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 07:43:15.497362   13768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 07:43:15.497362   13768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 07:43:15.497362   13768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 07:43:15.498101   13768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 07:43:15.498893   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:43:15.531284   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:43:15.570710   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:43:15.647264   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:43:15.682263   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 07:43:15.709299   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:43:15.741356   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:43:15.773312   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:43:15.803353   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:43:15.830318   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 07:43:15.856891   13768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 07:43:15.889259   13768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:43:15.974272   13768 ssh_runner.go:195] Run: openssl version
	I1205 07:43:15.992128   13768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:43:16.094804   13768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:43:16.116897   13768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:43:16.125170   13768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:43:16.129173   13768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:43:16.191601   13768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:43:16.211847   13768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 07:43:16.228825   13768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 07:43:16.246260   13768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 07:43:16.253979   13768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 07:43:16.256989   13768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 07:43:16.305113   13768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:43:16.321112   13768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 07:43:16.337511   13768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 07:43:16.356512   13768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 07:43:16.365511   13768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 07:43:16.370514   13768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 07:43:16.428147   13768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
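The ln -fs, openssl x509 -hash, and test -L triplets above wire each PEM into OpenSSL's hashed trust directory: the subject hash names the /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 here). A Go sketch of that wiring, shelling out to openssl for the hash (illustrative; it requires openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert hashes the certificate subject the way OpenSSL's lookup expects
// and points /etc/ssl/certs/<hash>.0 at the PEM, like the ln -fs runs above.
func trustCert(pemPath string) error {
	// openssl x509 -hash -noout prints only the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}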
	I1205 07:43:16.450681   13768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:43:16.462701   13768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:43:16.520128   13768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:43:16.574400   13768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:43:16.621181   13768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:43:16.688392   13768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:43:16.739434   13768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
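The openssl x509 -checkend 86400 runs above fail if a certificate expires within the next 24 hours. The same check done natively in Go (a sketch; the cert path is taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl's -checkend 86400: non-zero exit if the cert expires within 86400s.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}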
	I1205 07:43:16.795759   13768 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-863300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-863300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:43:16.798748   13768 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 07:43:16.835587   13768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:43:16.850904   13768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:43:16.850904   13768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:43:16.854479   13768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:43:16.867464   13768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:43:16.872473   13768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-863300
	I1205 07:43:16.921469   13768 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-863300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:43:16.921469   13768 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-863300" cluster setting kubeconfig missing "kubernetes-upgrade-863300" context setting]
	I1205 07:43:16.922465   13768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:43:16.941120   13768 kapi.go:59] client config for kubernetes-upgrade-863300: &rest.Config{Host:"https://127.0.0.1:60025", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-863300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-863300/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6443d7340), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 07:43:16.942556   13768 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 07:43:16.942616   13768 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 07:43:16.942616   13768 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 07:43:16.942616   13768 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 07:43:16.942616   13768 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
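The rest.Config dump above is the client-go configuration minikube builds from the repaired kubeconfig. A hedged client-go sketch that loads the same kubeconfig and lists nodes, roughly what the later health checks depend on (the path is from the log; the k8s.io/client-go dependency is an assumption for illustration):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a REST config from the profile's kubeconfig, as kapi.go does above.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err) // e.g. "connection refused" while the apiserver is down
	}
	fmt.Println("nodes:", len(nodes.Items))
}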
	I1205 07:43:16.947729   13768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:43:16.963309   13768 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 07:42:03.789947260 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 07:43:15.245406657 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-863300"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
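The drift forcing the reconfigure is the v1beta3-to-v1beta4 extraArgs shape change visible in the diff above: a string map becomes an ordered name/value list. A small Go illustration using simplified stand-in types (not kubeadm's real API structs), marshalled with gopkg.in/yaml.v3:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// arg is a simplified stand-in for kubeadm v1beta4's ordered argument entries.
type arg struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

func main() {
	v1beta3 := map[string]string{"leader-elect": "false"}    // old map form
	v1beta4 := []arg{{Name: "leader-elect", Value: "false"}} // new list form

	oldYAML, _ := yaml.Marshal(v1beta3)
	newYAML, _ := yaml.Marshal(v1beta4)
	fmt.Printf("v1beta3 extraArgs:\n%s\nv1beta4 extraArgs:\n%s", oldYAML, newYAML)
}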
	I1205 07:43:16.963309   13768 kubeadm.go:1161] stopping kube-system containers ...
	I1205 07:43:16.966308   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 07:43:17.003899   13768 docker.go:484] Stopping containers: [e22877b7e2b7 4415be792939 f3c5649a2875 bb0568ac426f 214cf8790e4e fb8c4a45dbb4 9734ecb075e5 7662292743be]
	I1205 07:43:17.010164   13768 ssh_runner.go:195] Run: docker stop e22877b7e2b7 4415be792939 f3c5649a2875 bb0568ac426f 214cf8790e4e fb8c4a45dbb4 9734ecb075e5 7662292743be
	I1205 07:43:17.044314   13768 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 07:43:17.075285   13768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:43:17.090287   13768 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  5 07:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  5 07:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  5 07:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec  5 07:42 /etc/kubernetes/scheduler.conf
	
	I1205 07:43:17.094280   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:43:17.110290   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:43:17.127290   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:43:17.139279   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:43:17.144308   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:43:17.163287   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:43:17.178282   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:43:17.182298   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:43:17.198298   13768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:43:17.215283   13768 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:43:17.282295   13768 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:43:18.029140   13768 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:43:18.324491   13768 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:43:18.403076   13768 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:43:18.503317   13768 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:43:18.508149   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:19.009951   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:19.507237   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:20.007541   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:20.509593   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:21.008818   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:21.509667   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:22.008817   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:22.508204   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:23.009772   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:23.509130   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:24.008011   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:24.508501   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:25.006839   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:25.507564   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:26.010885   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:26.508540   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:27.008381   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:27.508020   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:28.008671   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:28.508793   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:29.008054   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:29.507723   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:30.008833   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:30.508057   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:31.010184   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:31.508423   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:32.010115   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:32.508544   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:33.007334   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:33.507233   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:34.008457   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:34.507516   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:35.010107   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:35.509387   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:36.008584   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:36.509583   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:37.008325   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:37.507859   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:38.009057   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:38.508774   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:39.009023   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:39.509123   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:40.007940   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:40.508629   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:41.007547   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:41.508578   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:42.008831   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:42.509115   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:43.008678   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:43.508370   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:44.009014   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:44.509256   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:45.008823   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:45.508658   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:46.010285   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:46.508825   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:47.008353   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:47.508113   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:48.008875   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:48.511164   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:49.007380   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:49.508936   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:50.008862   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:50.508496   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:51.007516   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:51.509424   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:52.008212   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:52.508396   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:53.009307   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:53.507138   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:54.008969   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:54.509466   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:55.009437   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:55.510290   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:56.008658   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:56.509906   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:57.009019   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:57.509325   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:58.008643   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:58.508734   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:59.008257   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:43:59.508898   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:00.010330   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:00.508557   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:01.009127   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:01.508136   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:02.008381   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:02.508737   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:03.010139   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:03.509241   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:04.009521   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:04.508575   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:05.008588   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:05.508661   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:06.011080   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:06.507182   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:07.009051   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:07.509509   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:08.009059   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:08.507767   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:09.009485   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:09.508920   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:10.009006   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:10.509314   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:11.009133   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:11.509192   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:12.008655   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:12.508828   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:13.009959   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:13.508805   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:14.009852   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:14.509112   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:15.008813   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:15.508612   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:16.010237   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:16.508763   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:17.010031   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:17.508070   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:18.010053   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
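The minute of repeated pgrep runs above is minikube's wait loop: probe for a kube-apiserver process roughly every 500ms until a deadline. Here the process never appears, so the run falls through to the log gathering below. A minimal Go sketch of that loop (illustrative; the interval and timeout are assumptions read off the timestamps):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer repeats the pgrep check from the log until it succeeds
// or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe the log repeats: sudo pgrep -xnf kube-apiserver.*minikube.*
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // apiserver process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}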
	I1205 07:44:18.508192   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:18.539203   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:18.543195   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:18.575934   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:18.580012   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:18.609516   13768 logs.go:282] 0 containers: []
	W1205 07:44:18.609516   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:18.613994   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:18.644681   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:18.647681   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:18.676157   13768 logs.go:282] 0 containers: []
	W1205 07:44:18.676157   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:18.680031   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:18.715790   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:18.718789   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:18.751508   13768 logs.go:282] 0 containers: []
	W1205 07:44:18.751508   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:18.755241   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:18.784800   13768 logs.go:282] 0 containers: []
	W1205 07:44:18.784800   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:18.785803   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:18.785803   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:18.863259   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:18.863259   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:18.935787   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:18.935787   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:18.987409   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:18.987498   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:19.019244   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:19.019289   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:19.057338   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:19.057338   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:19.155029   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:19.155029   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:19.155029   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:19.243800   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:19.243800   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:19.292526   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:19.292526   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
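
The block above is one pass of minikube's log-gathering loop: for each expected control-plane component it lists containers whose Docker name carries the k8s_ prefix, then tails the logs of whatever it found. A minimal standalone sketch of that discovery step, assuming only the commands shown in the log (the component list is taken from the same pass):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  # Same filter the harness runs; -a includes exited containers.
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"${c}\""
	  else
	    for id in $ids; do docker logs --tail 400 "$id"; done
	  fi
	done
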
	I1205 07:44:21.831507   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:21.854110   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:21.889723   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:21.893466   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:21.924255   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:21.928395   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:21.958172   13768 logs.go:282] 0 containers: []
	W1205 07:44:21.958172   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:21.962307   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:21.991349   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:21.995002   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:22.023586   13768 logs.go:282] 0 containers: []
	W1205 07:44:22.023586   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:22.027614   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:22.056064   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:22.060217   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:22.090560   13768 logs.go:282] 0 containers: []
	W1205 07:44:22.090560   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:22.095074   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:22.124226   13768 logs.go:282] 0 containers: []
	W1205 07:44:22.124226   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:22.124226   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:22.124226   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:22.194535   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:22.194535   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:22.235924   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:22.235924   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:22.325270   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:22.325314   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:22.325314   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:22.374089   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:22.374089   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:22.415304   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:22.415826   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:22.458541   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:22.458541   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:22.496482   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:22.496482   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:22.550218   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:22.550272   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
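
The "container status" step uses a small shell fallback chain, visible verbatim in the command line above: the backtick substitution inserts crictl's full path when it is on PATH, otherwise the bare name (whose invocation then fails), and the trailing || sudo docker ps -a covers both a missing binary and a crictl run that errors out:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
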
	I1205 07:44:25.084288   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:25.106213   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:25.140873   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:25.146212   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:25.178498   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:25.182600   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:25.226595   13768 logs.go:282] 0 containers: []
	W1205 07:44:25.226595   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:25.230366   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:25.263929   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:25.270909   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:25.299983   13768 logs.go:282] 0 containers: []
	W1205 07:44:25.299983   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:25.304035   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:25.339766   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:25.344489   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:25.372126   13768 logs.go:282] 0 containers: []
	W1205 07:44:25.372126   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:25.376866   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:25.406894   13768 logs.go:282] 0 containers: []
	W1205 07:44:25.406894   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:25.406894   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:25.406894   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:25.471363   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:25.471363   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:25.513945   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:25.513945   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:25.599426   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:25.599955   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:25.599955   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:25.642221   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:25.642221   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:25.687093   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:25.687093   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:25.716000   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:25.716000   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:25.760547   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:25.760547   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:25.799851   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:25.800369   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:28.356326   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:28.381105   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:28.413178   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:28.416429   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:28.446636   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:28.449604   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:28.489670   13768 logs.go:282] 0 containers: []
	W1205 07:44:28.489670   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:28.493628   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:28.534185   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:28.538824   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:28.575797   13768 logs.go:282] 0 containers: []
	W1205 07:44:28.575797   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:28.579366   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:28.611037   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:28.614034   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:28.641048   13768 logs.go:282] 0 containers: []
	W1205 07:44:28.641048   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:28.644047   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:28.675320   13768 logs.go:282] 0 containers: []
	W1205 07:44:28.675320   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:28.675320   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:28.675320   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:28.767956   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:28.767956   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:28.767956   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:28.818814   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:28.818814   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:28.866299   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:28.866362   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:28.911568   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:28.911568   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:28.950212   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:28.950212   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:28.990341   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:28.990431   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:29.019584   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:29.019584   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:29.068823   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:29.068823   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:31.634294   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:31.666428   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:31.711312   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:31.714307   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:31.751979   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:31.756998   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:31.858323   13768 logs.go:282] 0 containers: []
	W1205 07:44:31.858323   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:31.862316   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:31.894465   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:31.898471   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:31.930074   13768 logs.go:282] 0 containers: []
	W1205 07:44:31.930074   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:31.936079   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:31.977473   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:31.981492   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:32.012591   13768 logs.go:282] 0 containers: []
	W1205 07:44:32.012591   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:32.017904   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:32.055993   13768 logs.go:282] 0 containers: []
	W1205 07:44:32.055993   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:32.055993   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:32.055993   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:32.130673   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:32.130673   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:32.240985   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:32.240985   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:32.241983   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:32.290395   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:32.290395   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:32.346047   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:32.346047   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:32.403091   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:32.403091   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:32.445104   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:32.445104   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:32.491316   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:32.491316   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:32.533759   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:32.533808   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:35.078087   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:35.098812   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:35.141917   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:35.145886   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:35.182755   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:35.186896   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:35.217205   13768 logs.go:282] 0 containers: []
	W1205 07:44:35.217205   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:35.220196   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:35.254665   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:35.257672   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:35.282671   13768 logs.go:282] 0 containers: []
	W1205 07:44:35.282671   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:35.286664   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:35.317510   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:35.322128   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:35.352700   13768 logs.go:282] 0 containers: []
	W1205 07:44:35.352700   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:35.356279   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:35.385119   13768 logs.go:282] 0 containers: []
	W1205 07:44:35.385119   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:35.385119   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:35.385119   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:35.424585   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:35.424585   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:35.520771   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:35.520771   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:35.520771   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:35.565150   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:35.565150   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:35.607863   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:35.608843   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:35.681535   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:35.682532   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:35.730730   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:35.730730   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:35.769401   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:35.769401   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:35.801107   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:35.801107   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:38.360224   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:38.382255   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:38.414338   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:38.418174   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:38.452897   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:38.457469   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:38.489521   13768 logs.go:282] 0 containers: []
	W1205 07:44:38.489521   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:38.493013   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:38.522827   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:38.526775   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:38.560904   13768 logs.go:282] 0 containers: []
	W1205 07:44:38.560904   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:38.564534   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:38.596330   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:38.600325   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:38.630708   13768 logs.go:282] 0 containers: []
	W1205 07:44:38.630708   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:38.634508   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:38.665392   13768 logs.go:282] 0 containers: []
	W1205 07:44:38.665392   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:38.665392   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:38.665392   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:38.713399   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:38.713399   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:38.754403   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:38.754403   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:38.789032   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:38.789104   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:38.857786   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:38.857786   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:38.898869   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:38.898869   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:38.981321   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:38.981321   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:38.981321   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:39.032242   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:39.032286   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:39.062859   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:39.062859   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:41.622370   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:41.650332   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:41.684011   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:41.689002   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:41.731181   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:41.735565   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:41.773385   13768 logs.go:282] 0 containers: []
	W1205 07:44:41.773385   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:41.779608   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:41.816993   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:41.821451   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:41.854159   13768 logs.go:282] 0 containers: []
	W1205 07:44:41.854159   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:41.858138   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:41.894511   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:41.898497   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:41.941069   13768 logs.go:282] 0 containers: []
	W1205 07:44:41.941069   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:41.945367   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:41.978557   13768 logs.go:282] 0 containers: []
	W1205 07:44:41.978557   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:41.978557   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:41.978557   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:42.085090   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:42.085090   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:42.085090   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:42.135898   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:42.136425   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:42.198443   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:42.198487   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:42.269158   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:42.269158   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:42.308769   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:42.308769   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:42.353591   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:42.353591   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:42.403582   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:42.403582   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:42.447862   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:42.447939   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:44.988801   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:45.011842   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:45.042398   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:45.046548   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:45.077676   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:45.082075   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:45.114103   13768 logs.go:282] 0 containers: []
	W1205 07:44:45.114182   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:45.118217   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:45.150692   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:45.154270   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:45.185003   13768 logs.go:282] 0 containers: []
	W1205 07:44:45.185083   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:45.188431   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:45.220114   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:45.223748   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:45.254651   13768 logs.go:282] 0 containers: []
	W1205 07:44:45.254731   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:45.258310   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:45.292056   13768 logs.go:282] 0 containers: []
	W1205 07:44:45.292109   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:45.292139   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:45.292139   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:45.350163   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:45.350163   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:45.393503   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:45.393503   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:45.444882   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:45.444965   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:45.475321   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:45.475321   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:45.531046   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:45.531046   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:45.572639   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:45.572639   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:45.638102   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:45.638102   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:45.678093   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:45.678093   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:45.768045   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
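
Every describe-nodes attempt in this window fails identically: the kubeconfig targets localhost:8443 and the connection is refused, so nothing is listening on the apiserver's secure port, even though docker ps -a (which includes exited containers) still reports a kube-apiserver container ID. Two quick checks along the same lines; the pgrep pattern is the one the harness itself polls with, while the curl probe is an added assumption, not part of the test output:

	# Does a matching apiserver process exist at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Hypothetical direct probe of the secure port; -k skips TLS verification.
	curl -sk https://localhost:8443/healthz
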
	I1205 07:44:48.272343   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:48.296385   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:48.329459   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:48.333434   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:48.366376   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:48.371888   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:48.409276   13768 logs.go:282] 0 containers: []
	W1205 07:44:48.409276   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:48.414771   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:48.447534   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:48.451038   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:48.480628   13768 logs.go:282] 0 containers: []
	W1205 07:44:48.480706   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:48.485637   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:48.516056   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:48.519214   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:48.552158   13768 logs.go:282] 0 containers: []
	W1205 07:44:48.552212   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:48.556238   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:48.586007   13768 logs.go:282] 0 containers: []
	W1205 07:44:48.586007   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:48.586090   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:48.586090   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:48.633062   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:48.634053   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:48.674360   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:48.674360   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:48.701347   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:48.701347   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:48.789873   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:48.789873   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:48.789873   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:48.828925   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:48.828925   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:48.878800   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:48.878800   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:48.937727   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:48.937789   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:49.001866   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:49.001866   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:51.546824   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:51.571357   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:51.602187   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:51.606890   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:51.636406   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:51.639979   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:51.671681   13768 logs.go:282] 0 containers: []
	W1205 07:44:51.671681   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:51.676098   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:51.705271   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:51.709261   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:51.741269   13768 logs.go:282] 0 containers: []
	W1205 07:44:51.741269   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:51.745404   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:51.789160   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:51.791825   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:51.820544   13768 logs.go:282] 0 containers: []
	W1205 07:44:51.820544   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:51.825039   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:51.855842   13768 logs.go:282] 0 containers: []
	W1205 07:44:51.855842   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:51.855842   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:51.855842   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:51.897539   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:51.897539   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:51.979071   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:51.979071   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:51.979071   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:52.020780   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:52.020780   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:52.059980   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:52.059980   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:52.102504   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:52.103498   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:52.153893   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:52.153893   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:52.222138   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:52.223139   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:52.261454   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:52.261561   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:54.797026   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:54.818576   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:54.851761   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:54.856407   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:54.890175   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:54.893924   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:54.923478   13768 logs.go:282] 0 containers: []
	W1205 07:44:54.923478   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:54.928481   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:54.960370   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:54.965697   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:54.996820   13768 logs.go:282] 0 containers: []
	W1205 07:44:54.996820   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:55.000626   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:55.031412   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:55.035414   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:55.066664   13768 logs.go:282] 0 containers: []
	W1205 07:44:55.066664   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:55.070323   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:55.102301   13768 logs.go:282] 0 containers: []
	W1205 07:44:55.102301   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:55.102301   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:55.102301   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:55.141302   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:55.141302   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:55.195328   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:55.195328   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:55.229899   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:55.230906   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:55.258500   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:55.259078   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:55.307565   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:55.307565   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:55.369575   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:55.370575   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:55.459384   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:55.459384   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:55.459384   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:44:55.497420   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:55.497420   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:58.045430   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:44:58.070414   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:44:58.106636   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:44:58.111281   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:44:58.147687   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:44:58.152268   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:44:58.183589   13768 logs.go:282] 0 containers: []
	W1205 07:44:58.184598   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:44:58.188606   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:44:58.245584   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:44:58.248574   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:44:58.283554   13768 logs.go:282] 0 containers: []
	W1205 07:44:58.283554   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:44:58.289106   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:44:58.319736   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:44:58.323997   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:44:58.362321   13768 logs.go:282] 0 containers: []
	W1205 07:44:58.362321   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:44:58.365771   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:44:58.399352   13768 logs.go:282] 0 containers: []
	W1205 07:44:58.399352   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:44:58.399352   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:44:58.399352   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:44:58.470437   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:44:58.470437   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:44:58.559769   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:44:58.559802   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:44:58.559851   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:44:58.610811   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:44:58.610811   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:44:58.674373   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:44:58.674373   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:44:58.712216   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:44:58.712277   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:44:58.741291   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:44:58.741291   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:44:58.799854   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:44:58.799854   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:44:58.837554   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:44:58.838547   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:01.390525   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:01.418910   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:01.453324   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:01.456918   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:01.494393   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:01.498374   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:01.536809   13768 logs.go:282] 0 containers: []
	W1205 07:45:01.536898   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:01.541897   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:01.582197   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:01.585207   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:01.614207   13768 logs.go:282] 0 containers: []
	W1205 07:45:01.615198   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:01.618194   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:01.654195   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:01.657195   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:01.687207   13768 logs.go:282] 0 containers: []
	W1205 07:45:01.687207   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:01.691197   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:01.723198   13768 logs.go:282] 0 containers: []
	W1205 07:45:01.723198   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:01.723198   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:01.723198   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:01.773199   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:01.773199   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:01.812215   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:01.812215   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:01.841370   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:01.841370   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:01.909459   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:01.909459   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:01.952546   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:01.952546   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:02.087191   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:02.087191   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:02.087191   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:02.139563   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:02.139563   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:02.180005   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:02.180005   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:04.732441   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:04.756441   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:04.788446   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:04.791445   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:04.823452   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:04.827456   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:04.855746   13768 logs.go:282] 0 containers: []
	W1205 07:45:04.855746   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:04.859494   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:04.891087   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:04.894972   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:04.926917   13768 logs.go:282] 0 containers: []
	W1205 07:45:04.926917   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:04.931564   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:04.965499   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:04.970707   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:05.001658   13768 logs.go:282] 0 containers: []
	W1205 07:45:05.001658   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:05.005922   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:05.034690   13768 logs.go:282] 0 containers: []
	W1205 07:45:05.034690   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:05.034773   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:05.034773   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:05.100537   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:05.100537   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:05.147471   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:05.147471   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:05.199120   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:05.200121   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:05.238121   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:05.238121   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:05.272125   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:05.272125   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:05.299129   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:05.299129   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:05.355870   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:05.355870   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:05.457222   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:05.457222   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:05.457222   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:08.025306   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:08.105959   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:08.140238   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:08.143903   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:08.176936   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:08.180911   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:08.213056   13768 logs.go:282] 0 containers: []
	W1205 07:45:08.213056   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:08.217189   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:08.248820   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:08.251914   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:08.281711   13768 logs.go:282] 0 containers: []
	W1205 07:45:08.281711   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:08.285184   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:08.329439   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:08.335148   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:08.377273   13768 logs.go:282] 0 containers: []
	W1205 07:45:08.377273   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:08.380272   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:08.412279   13768 logs.go:282] 0 containers: []
	W1205 07:45:08.412279   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:08.412279   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:08.412279   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:08.478523   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:08.479043   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:08.586559   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:08.586559   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:08.586559   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:08.638579   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:08.638579   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:08.692574   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:08.692574   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:08.720978   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:08.720978   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:08.773577   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:08.773577   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:08.810407   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:08.810407   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:08.855751   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:08.855820   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:11.398046   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:11.420450   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:11.454665   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:11.457622   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:11.488235   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:11.491891   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:11.519886   13768 logs.go:282] 0 containers: []
	W1205 07:45:11.519958   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:11.523313   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:11.553084   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:11.556629   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:11.587268   13768 logs.go:282] 0 containers: []
	W1205 07:45:11.587337   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:11.590755   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:11.622142   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:11.626522   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:11.660097   13768 logs.go:282] 0 containers: []
	W1205 07:45:11.660097   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:11.664394   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:11.695396   13768 logs.go:282] 0 containers: []
	W1205 07:45:11.695396   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:11.695396   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:11.695396   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:11.783095   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:11.783095   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:11.783095   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:11.831625   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:11.832140   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:11.873044   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:11.873044   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:11.916512   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:11.916512   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:11.958894   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:11.958946   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:11.989093   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:11.989150   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:12.047103   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:12.047103   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:12.109686   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:12.109686   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:14.652007   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:14.675224   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:14.710342   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:14.714557   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:14.747530   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:14.751265   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:14.783694   13768 logs.go:282] 0 containers: []
	W1205 07:45:14.783753   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:14.787252   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:14.817781   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:14.820635   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:14.852489   13768 logs.go:282] 0 containers: []
	W1205 07:45:14.852576   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:14.856277   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:14.887647   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:14.890964   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:14.919887   13768 logs.go:282] 0 containers: []
	W1205 07:45:14.919887   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:14.924371   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:14.955918   13768 logs.go:282] 0 containers: []
	W1205 07:45:14.955965   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:14.956015   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:14.956015   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:15.024881   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:15.024881   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:15.065252   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:15.065252   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:15.149626   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:15.149661   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:15.149661   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:15.199657   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:15.199657   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:15.245273   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:15.245273   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:15.280893   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:15.280930   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:15.307670   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:15.307670   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:15.358497   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:15.358497   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:17.910001   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:17.933867   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:17.966018   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:17.969893   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:17.999868   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:18.003820   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:18.033296   13768 logs.go:282] 0 containers: []
	W1205 07:45:18.033296   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:18.037471   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:18.072045   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:18.075270   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:18.103914   13768 logs.go:282] 0 containers: []
	W1205 07:45:18.103914   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:18.108233   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:18.137908   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:18.141559   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:18.170802   13768 logs.go:282] 0 containers: []
	W1205 07:45:18.170802   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:18.175112   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:18.203149   13768 logs.go:282] 0 containers: []
	W1205 07:45:18.203149   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:18.203149   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:18.203149   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:18.270509   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:18.270509   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:18.312895   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:18.312895   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:18.363984   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:18.364509   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:18.399702   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:18.399745   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:18.486151   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:18.486151   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:18.486151   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:18.532087   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:18.532087   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:18.575024   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:18.575024   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:18.604115   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:18.604115   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:21.159563   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:21.184702   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:21.219371   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:21.224598   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:21.258041   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:21.262343   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:21.299275   13768 logs.go:282] 0 containers: []
	W1205 07:45:21.299275   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:21.303325   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:21.336364   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:21.340155   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:21.367206   13768 logs.go:282] 0 containers: []
	W1205 07:45:21.367206   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:21.371602   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:21.405269   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:21.409364   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:21.437826   13768 logs.go:282] 0 containers: []
	W1205 07:45:21.437826   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:21.443152   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:21.473441   13768 logs.go:282] 0 containers: []
	W1205 07:45:21.473441   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:21.473441   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:21.473441   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:21.511460   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:21.511460   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:21.563934   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:21.563934   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:21.616512   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:21.616592   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:21.662323   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:21.662323   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:21.693186   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:21.693186   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:21.756592   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:21.756592   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:21.859868   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:21.859868   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:21.859868   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:21.894876   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:21.894876   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:24.469993   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:24.503494   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:24.545480   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:24.548492   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:24.580480   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:24.584478   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:24.617481   13768 logs.go:282] 0 containers: []
	W1205 07:45:24.617481   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:24.621482   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:24.669845   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:24.675752   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:24.710579   13768 logs.go:282] 0 containers: []
	W1205 07:45:24.710631   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:24.715259   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:24.751540   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:24.754531   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:24.788527   13768 logs.go:282] 0 containers: []
	W1205 07:45:24.788527   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:24.792539   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:24.821539   13768 logs.go:282] 0 containers: []
	W1205 07:45:24.821539   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:24.821539   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:24.822530   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:24.893282   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:24.893282   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:24.957041   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:24.957041   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:25.009480   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:25.009480   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:25.055467   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:25.055467   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:25.086476   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:25.086476   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:25.147475   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:25.147475   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:25.191470   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:25.191470   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:25.295614   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:25.295614   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:25.295614   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:27.852791   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:27.872793   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:27.911176   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:27.915090   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:27.958618   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:27.963601   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:27.997609   13768 logs.go:282] 0 containers: []
	W1205 07:45:27.997609   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:28.000602   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:28.030603   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:28.033599   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:28.067397   13768 logs.go:282] 0 containers: []
	W1205 07:45:28.067397   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:28.071679   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:28.106743   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:28.111477   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:28.144148   13768 logs.go:282] 0 containers: []
	W1205 07:45:28.144148   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:28.147149   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:28.177691   13768 logs.go:282] 0 containers: []
	W1205 07:45:28.177691   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:28.177691   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:28.177691   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:28.285747   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:28.285747   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:28.329233   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:28.329233   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:28.418531   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:28.418531   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:28.418531   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:28.462348   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:28.462348   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:28.513188   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:28.513188   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:28.541333   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:28.541378   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:28.595526   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:28.595526   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:28.642589   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:28.642589   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:31.186824   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:31.208322   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:31.239132   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:31.243120   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:31.272129   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:31.275119   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:31.306119   13768 logs.go:282] 0 containers: []
	W1205 07:45:31.306119   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:31.310122   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:31.344128   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:31.348123   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:31.383121   13768 logs.go:282] 0 containers: []
	W1205 07:45:31.383121   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:31.386119   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:31.418062   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:31.422326   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:31.450904   13768 logs.go:282] 0 containers: []
	W1205 07:45:31.451906   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:31.454903   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:31.484167   13768 logs.go:282] 0 containers: []
	W1205 07:45:31.484167   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:31.484167   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:31.484167   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:31.531802   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:31.531802   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:31.572793   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:31.572793   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:31.608169   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:31.608169   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:31.640919   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:31.641010   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:31.703739   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:31.703739   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:31.743333   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:31.743378   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:31.783914   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:31.783914   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:31.835888   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:31.835888   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:31.931533   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
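	(The cycle above repeats roughly every three seconds: minikube probes for a running kube-apiserver with pgrep, enumerates each control-plane container with docker ps name filters, tails the last 400 lines of each container's logs plus the kubelet and docker journals, and retries describe nodes, which keeps failing while localhost:8443 refuses connections. As a rough illustration only, the Go sketch below reimplements that poll-and-gather pattern locally; it is not minikube's code, the helper names containerIDs, gatherLogs, and apiServerRunning are hypothetical, and it shells out to docker/pgrep directly rather than through minikube's ssh_runner.)

	```go
	// Illustrative sketch of the diagnostic polling loop visible in the log
	// above. NOT minikube's implementation: helpers are hypothetical and
	// commands run locally instead of over minikube's ssh_runner.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerIDs mirrors `docker ps -a --filter=name=k8s_<component>
	// --format={{.ID}}` from the log: it returns the IDs of containers
	// whose names match the given control-plane component.
	func containerIDs(component string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	// gatherLogs mirrors `docker logs --tail 400 <id>`: it pulls the last
	// 400 lines of one container's output.
	func gatherLogs(id string) string {
		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		return string(out)
	}

	// apiServerRunning is a simplified stand-in for the log's
	// `pgrep -xnf kube-apiserver.*minikube.*`: pgrep exits 0 only when a
	// matching process exists.
	func apiServerRunning() bool {
		return exec.Command("pgrep", "-f", "kube-apiserver").Run() == nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager"}
		for !apiServerRunning() {
			for _, c := range components {
				ids := containerIDs(c)
				fmt.Printf("%d containers: %v\n", len(ids), ids)
				for _, id := range ids {
					fmt.Println(gatherLogs(id))
				}
			}
			time.Sleep(3 * time.Second) // the log shows ~3s between cycles
		}
	}
	```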
	I1205 07:45:34.436416   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:34.460127   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:34.492489   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:34.496636   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:34.530919   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:34.534923   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:34.565429   13768 logs.go:282] 0 containers: []
	W1205 07:45:34.565482   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:34.569103   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:34.601643   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:34.605243   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:34.640131   13768 logs.go:282] 0 containers: []
	W1205 07:45:34.640214   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:34.644177   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:34.683067   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:34.686060   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:34.718989   13768 logs.go:282] 0 containers: []
	W1205 07:45:34.719033   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:34.724779   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:34.765462   13768 logs.go:282] 0 containers: []
	W1205 07:45:34.765462   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:34.765462   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:34.765462   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:34.800586   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:34.801119   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:34.877376   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:34.877376   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:34.936150   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:34.936150   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:34.982138   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:34.982138   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:35.039335   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:35.039417   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:35.082284   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:35.082284   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:35.191878   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:35.191931   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:35.191990   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:35.242392   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:35.242392   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:37.790324   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:37.813138   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:37.848290   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:37.854445   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:37.890592   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:37.894649   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:37.931172   13768 logs.go:282] 0 containers: []
	W1205 07:45:37.931172   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:37.935379   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:37.966350   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:37.970017   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:38.000283   13768 logs.go:282] 0 containers: []
	W1205 07:45:38.000364   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:38.004241   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:38.038010   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:38.042210   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:38.074196   13768 logs.go:282] 0 containers: []
	W1205 07:45:38.074196   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:38.077782   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:38.115822   13768 logs.go:282] 0 containers: []
	W1205 07:45:38.115862   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:38.115906   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:38.115942   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:38.148829   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:38.149843   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:38.207848   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:38.208379   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:38.275879   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:38.275933   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:38.322064   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:38.322109   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:38.362269   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:38.362269   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:38.463268   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:38.463268   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:38.463268   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:38.515233   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:38.515233   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:38.566945   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:38.566945   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:41.110695   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:41.136078   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:41.178673   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:41.184438   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:41.215134   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:41.219276   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:41.260112   13768 logs.go:282] 0 containers: []
	W1205 07:45:41.260112   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:41.263102   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:41.294098   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:41.299031   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:41.329197   13768 logs.go:282] 0 containers: []
	W1205 07:45:41.329243   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:41.333346   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:41.365419   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:41.372313   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:41.406788   13768 logs.go:282] 0 containers: []
	W1205 07:45:41.406788   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:41.411815   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:41.442948   13768 logs.go:282] 0 containers: []
	W1205 07:45:41.442948   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:41.442948   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:41.442948   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:41.510370   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:41.510370   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:41.550615   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:41.550689   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:41.602177   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:41.602177   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:41.726726   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:41.726817   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:41.726817   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:41.774106   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:41.774106   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:41.818985   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:41.818985   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:41.862386   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:41.862471   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:41.892941   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:41.892941   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:44.457782   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.481517   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:44.517264   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:44.520938   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:44.550661   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:44.554530   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:44.583353   13768 logs.go:282] 0 containers: []
	W1205 07:45:44.583353   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:44.588799   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:44.619027   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:44.622027   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:44.656425   13768 logs.go:282] 0 containers: []
	W1205 07:45:44.656425   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:44.662363   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:44.692648   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:44.695639   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:44.726879   13768 logs.go:282] 0 containers: []
	W1205 07:45:44.726879   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:44.730144   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:44.761697   13768 logs.go:282] 0 containers: []
	W1205 07:45:44.761697   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:44.761697   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:44.761697   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:44.828291   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:44.828291   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:44.870259   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:44.870259   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:44.927275   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:44.927275   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:44.962282   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:44.962282   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:45.059828   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:45.059828   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:45.059828   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:45.106758   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:45.106758   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:45.149528   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:45.149528   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:45.189261   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:45.189261   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
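	The cycle that just ended repeats roughly every three seconds for the remainder of this log: minikube probes for a running kube-apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*", re-enumerates the control-plane containers, and re-gathers their logs before probing again. A minimal Go sketch of that wait-and-collect loop follows; the helper name, retry count, and sleep interval are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep probe that opens each cycle in
	// the log above; a zero exit status means a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			// In the log, the same container discovery and log
			// gathering re-run on every iteration before this retry.
			fmt.Printf("attempt %d: apiserver not up, retrying\n", attempt)
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}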
	I1205 07:45:47.747044   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.772240   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:47.805240   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:47.808236   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:47.840247   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:47.844248   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:47.878112   13768 logs.go:282] 0 containers: []
	W1205 07:45:47.878112   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:47.883930   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:47.924213   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:47.929227   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:47.965293   13768 logs.go:282] 0 containers: []
	W1205 07:45:47.965293   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:47.970283   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:48.006288   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:48.009282   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:48.046290   13768 logs.go:282] 0 containers: []
	W1205 07:45:48.046290   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:48.051293   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:48.087290   13768 logs.go:282] 0 containers: []
	W1205 07:45:48.087290   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:48.087290   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:48.087290   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:48.146287   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:48.146287   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:48.187316   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:48.187316   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:48.242897   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:48.242897   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:48.277909   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:48.277909   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:48.350814   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:48.350814   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:48.452529   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:48.452726   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:48.452726   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:48.501358   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:48.501358   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:48.550798   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:48.550798   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:51.089829   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.115153   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:51.160652   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:51.164652   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:51.264712   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:51.268222   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:51.297406   13768 logs.go:282] 0 containers: []
	W1205 07:45:51.297406   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:51.301414   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:51.335438   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:51.339426   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:51.373416   13768 logs.go:282] 0 containers: []
	W1205 07:45:51.373416   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:51.376408   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:51.407983   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:51.411984   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:51.441215   13768 logs.go:282] 0 containers: []
	W1205 07:45:51.441215   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:51.445656   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:51.479145   13768 logs.go:282] 0 containers: []
	W1205 07:45:51.479145   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:51.479145   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:51.479145   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:51.630498   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:51.630498   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:51.685513   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:51.685513   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:51.750500   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:51.750500   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:51.788493   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:51.788493   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:51.900472   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:51.900472   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:51.962366   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:51.962366   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:52.062300   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:52.062300   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:52.062300   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:52.108216   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:52.108216   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
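	Each "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" line above maps a control-plane component to its container ID through the k8s_ name-prefix convention; an empty result is exactly what produces the warnings "No container was found matching" coredns, kube-proxy, kindnet, and storage-provisioner. A sketch of that discovery step, assuming the docker CLI is on PATH (the function name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of containers whose name matches the
	// k8s_<component> prefix convention used in the log above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}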
	I1205 07:45:54.663454   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.684329   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:54.714330   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:54.718323   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:54.752322   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:54.755318   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:54.785339   13768 logs.go:282] 0 containers: []
	W1205 07:45:54.785339   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:54.789349   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:54.821564   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:54.825548   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:54.857168   13768 logs.go:282] 0 containers: []
	W1205 07:45:54.857168   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:54.863061   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:54.893088   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:54.897085   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:54.929677   13768 logs.go:282] 0 containers: []
	W1205 07:45:54.929677   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:54.932685   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:54.965604   13768 logs.go:282] 0 containers: []
	W1205 07:45:54.965604   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:54.965604   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:54.965604   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:54.997727   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:54.997727   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:55.068697   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:55.068697   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:55.105692   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:55.105692   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:55.223280   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:55.223357   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:55.223399   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:45:55.271741   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:55.271741   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:55.310862   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:55.311856   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:55.364355   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:55.364355   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:55.407759   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:55.407874   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:57.987197   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.010120   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:45:58.043879   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:45:58.047623   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:45:58.081175   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:45:58.084876   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:45:58.119753   13768 logs.go:282] 0 containers: []
	W1205 07:45:58.119753   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:45:58.122759   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:45:58.157684   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:45:58.161696   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:45:58.219390   13768 logs.go:282] 0 containers: []
	W1205 07:45:58.219390   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:45:58.222383   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:45:58.254319   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:45:58.258439   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:45:58.293516   13768 logs.go:282] 0 containers: []
	W1205 07:45:58.293516   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:45:58.296512   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:45:58.329722   13768 logs.go:282] 0 containers: []
	W1205 07:45:58.329778   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:45:58.329778   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:45:58.329778   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:45:58.369052   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:45:58.369052   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:45:58.406090   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:45:58.406090   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:45:58.457675   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:45:58.457675   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:45:58.522469   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:45:58.522469   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:45:58.607503   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:45:58.607554   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:45:58.607554   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:45:58.656828   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:45:58.656856   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:45:58.684773   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:45:58.684773   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:45:58.725637   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:45:58.725637   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:01.278748   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.303648   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:01.334092   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:01.337897   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:01.369311   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:01.374433   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:01.407635   13768 logs.go:282] 0 containers: []
	W1205 07:46:01.407635   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:01.411362   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:01.442772   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:01.445766   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:01.479345   13768 logs.go:282] 0 containers: []
	W1205 07:46:01.479345   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:01.482337   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:01.517037   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:01.521597   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:01.555732   13768 logs.go:282] 0 containers: []
	W1205 07:46:01.555787   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:01.560360   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:01.591376   13768 logs.go:282] 0 containers: []
	W1205 07:46:01.591376   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:01.591376   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:01.591376   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:01.631606   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:01.631665   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:01.709916   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:01.709916   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:01.751826   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:01.751826   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:01.799250   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:01.799250   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:01.831763   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:01.831763   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:01.897768   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:01.897768   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:02.005760   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:02.005760   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:02.005760   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:02.055758   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:02.055758   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
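	Every failed "describe nodes" attempt above reduces to the same symptom: TCP connections to localhost:8443, the apiserver secure port named in /var/lib/minikube/kubeconfig, are actively refused, meaning nothing is listening on that port (distinct from a timeout or a TLS handshake failure). A quick standalone check that reproduces the distinction, sketched with Go's net dialer (illustrative only):

	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		// Dial the secure port the way kubectl resolves it from the
		// kubeconfig in the log (https://localhost:8443).
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("port 8443 is accepting connections")
			return
		}
		if errors.Is(err, syscall.ECONNREFUSED) {
			// Matches the kubectl error in the log: the host resolves
			// but no process is bound to the port.
			fmt.Println("connection refused: no listener on 8443")
			return
		}
		fmt.Println("other failure (timeout, DNS, ...):", err)
	}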
	I1205 07:46:04.608244   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:04.640129   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:04.679282   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:04.686490   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:04.720758   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:04.726802   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:04.761164   13768 logs.go:282] 0 containers: []
	W1205 07:46:04.761164   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:04.765396   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:04.797867   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:04.801213   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:04.830327   13768 logs.go:282] 0 containers: []
	W1205 07:46:04.830327   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:04.833329   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:04.865943   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:04.871110   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:04.902084   13768 logs.go:282] 0 containers: []
	W1205 07:46:04.902084   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:04.905920   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:04.934404   13768 logs.go:282] 0 containers: []
	W1205 07:46:04.934404   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:04.934404   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:04.934404   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:04.972337   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:04.972337   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:05.060087   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:05.060087   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:05.060087   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:05.110364   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:05.110364   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:05.158292   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:05.158312   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:05.202941   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:05.202976   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:05.243283   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:05.243355   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:05.294624   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:05.294624   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:05.371785   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:05.371785   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:07.904800   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.931065   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:07.966426   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:07.971428   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:08.002648   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:08.006851   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:08.037750   13768 logs.go:282] 0 containers: []
	W1205 07:46:08.037750   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:08.041228   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:08.077788   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:08.081442   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:08.108574   13768 logs.go:282] 0 containers: []
	W1205 07:46:08.108656   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:08.111893   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:08.146780   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:08.150487   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:08.179625   13768 logs.go:282] 0 containers: []
	W1205 07:46:08.179625   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:08.183757   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:08.216262   13768 logs.go:282] 0 containers: []
	W1205 07:46:08.216286   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:08.216286   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:08.216286   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:08.253583   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:08.253583   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:08.296622   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:08.296663   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:08.330347   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:08.330441   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:08.383879   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:08.383879   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:08.472791   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:08.472791   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:08.472791   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:08.517994   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:08.517994   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:08.562637   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:08.562637   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:08.591716   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:08.591716   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:11.161982   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.191296   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:11.231272   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:11.236690   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:11.295217   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:11.301202   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:11.345372   13768 logs.go:282] 0 containers: []
	W1205 07:46:11.345372   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:11.349372   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:11.379373   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:11.382372   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:11.412372   13768 logs.go:282] 0 containers: []
	W1205 07:46:11.412372   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:11.416390   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:11.444399   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:11.447372   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:11.477379   13768 logs.go:282] 0 containers: []
	W1205 07:46:11.478377   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:11.481375   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:11.510374   13768 logs.go:282] 0 containers: []
	W1205 07:46:11.510374   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:11.510374   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:11.510374   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:11.555275   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:11.555275   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:11.659677   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:11.659677   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:11.659677   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:11.745840   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:11.745840   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:11.773838   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:11.773838   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:12.125570   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:12.125570   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:12.189880   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:12.189880   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:12.255529   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:12.256050   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:12.308112   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:12.308641   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
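	The per-component "docker logs --tail 400 <id>" calls above, like the matching "journalctl ... -n 400" calls for kubelet and Docker, bound each log source to a fixed window so a crash-looping component cannot flood the report. A sketch of the same bounded collection step; the container ID shown is the kube-controller-manager ID taken from this log, and the helper name is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs mirrors the bounded docker logs calls in the log,
	// capturing both stdout and stderr since docker logs writes to both.
	func tailContainerLogs(id string, lines int) (string, error) {
		out, err := exec.Command("docker", "logs",
			"--tail", fmt.Sprint(lines), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		logs, err := tailContainerLogs("f3c5649a2875", 400)
		if err != nil {
			fmt.Println("docker logs failed:", err)
			return
		}
		fmt.Print(logs)
	}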
	I1205 07:46:14.850836   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.879166   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:14.912958   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:14.917281   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:14.949603   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:14.953724   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:14.985483   13768 logs.go:282] 0 containers: []
	W1205 07:46:14.985483   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:14.989034   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:15.031389   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:15.034388   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:15.068240   13768 logs.go:282] 0 containers: []
	W1205 07:46:15.068240   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:15.073926   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:15.112718   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:15.119601   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:15.161733   13768 logs.go:282] 0 containers: []
	W1205 07:46:15.161733   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:15.169371   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:15.206242   13768 logs.go:282] 0 containers: []
	W1205 07:46:15.206242   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:15.206242   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:15.206242   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:15.268242   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:15.268242   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:15.310371   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:15.310371   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:15.361949   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:15.361949   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:15.409621   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:15.409621   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:15.458675   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:15.458675   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:15.549679   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:15.549679   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:15.549679   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:15.592681   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:15.592681   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:15.631635   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:15.631681   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:18.247670   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.271788   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:18.301994   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:18.305958   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:18.336722   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:18.340302   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:18.370424   13768 logs.go:282] 0 containers: []
	W1205 07:46:18.370424   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:18.374822   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:18.413819   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:18.417822   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:18.451038   13768 logs.go:282] 0 containers: []
	W1205 07:46:18.451038   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:18.455896   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:18.485614   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:18.490335   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:18.520752   13768 logs.go:282] 0 containers: []
	W1205 07:46:18.520752   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:18.524898   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:18.556271   13768 logs.go:282] 0 containers: []
	W1205 07:46:18.556367   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:18.556409   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:18.556409   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:18.604094   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:18.604094   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:18.644964   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:18.644964   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:18.679650   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:18.679650   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:18.709197   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:18.709197   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:18.762619   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:18.762705   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:18.831902   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:18.831902   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:18.875176   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:18.875176   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:18.960460   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:18.960460   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:18.960460   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:21.518382   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.538944   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:21.570942   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:21.573932   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:21.604946   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:21.607941   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:21.637955   13768 logs.go:282] 0 containers: []
	W1205 07:46:21.638936   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:21.641936   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:21.674604   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:21.678322   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:21.706869   13768 logs.go:282] 0 containers: []
	W1205 07:46:21.706869   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:21.710814   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:21.743193   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:21.746188   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:21.776208   13768 logs.go:282] 0 containers: []
	W1205 07:46:21.776208   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:21.780191   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:21.814192   13768 logs.go:282] 0 containers: []
	W1205 07:46:21.814192   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:21.814192   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:21.814192   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:21.853197   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:21.853197   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:21.893190   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:21.893190   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:21.920191   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:21.920191   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:21.983200   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:21.983200   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:22.066429   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:22.066429   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:22.066429   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:22.117426   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:22.117426   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:22.161423   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:22.161423   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:22.195985   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:22.195985   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
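	The container-status command above, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, is a two-level fallback: prefer crictl when it resolves on PATH, and fall back to plain "docker ps -a" when crictl is missing or fails. The same logic sketched in Go (the helper name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the shell fallback in the log: try crictl
	// first if it is installed, otherwise use the docker CLI.
	func containerStatus() (string, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
			if err == nil {
				return string(out), nil
			}
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
			return
		}
		fmt.Print(out)
	}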
	I1205 07:46:24.759761   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.781968   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:24.816389   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:24.820001   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:24.850188   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:24.854095   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:24.885544   13768 logs.go:282] 0 containers: []
	W1205 07:46:24.885608   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:24.889489   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:24.920602   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:24.924081   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:24.953952   13768 logs.go:282] 0 containers: []
	W1205 07:46:24.954007   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:24.957544   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:24.990989   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:24.994481   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:25.023311   13768 logs.go:282] 0 containers: []
	W1205 07:46:25.023364   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:25.027894   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:25.055936   13768 logs.go:282] 0 containers: []
	W1205 07:46:25.055936   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:25.055936   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:25.055936   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:25.099617   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:25.099617   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:25.128802   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:25.128802   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:25.175804   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:25.175804   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:25.238227   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:25.238227   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:25.277460   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:25.277460   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:25.323396   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:25.323396   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:25.360200   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:25.360272   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:25.442791   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:25.442791   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:25.442791   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:27.996009   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.017752   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:28.055244   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:28.059245   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:28.086235   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:28.089228   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:28.117959   13768 logs.go:282] 0 containers: []
	W1205 07:46:28.117959   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:28.122942   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:28.158826   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:28.163008   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:28.195403   13768 logs.go:282] 0 containers: []
	W1205 07:46:28.195403   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:28.201845   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:28.241933   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:28.245937   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:28.275108   13768 logs.go:282] 0 containers: []
	W1205 07:46:28.275108   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:28.278094   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:28.307159   13768 logs.go:282] 0 containers: []
	W1205 07:46:28.307159   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:28.307159   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:28.307159   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:28.361153   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:28.361214   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:28.413591   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:28.413591   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:28.477599   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:28.477599   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:28.552052   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:28.552052   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:28.590069   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:28.590069   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:28.619071   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:28.619071   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:28.720649   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:28.720649   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:28.720649   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:28.773210   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:28.773210   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:31.323958   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.354608   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:31.400135   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:31.404151   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:31.440311   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:31.444804   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:31.475342   13768 logs.go:282] 0 containers: []
	W1205 07:46:31.475342   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:31.479335   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:31.509338   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:31.512343   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:31.547845   13768 logs.go:282] 0 containers: []
	W1205 07:46:31.548831   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:31.552833   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:31.591199   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:31.597674   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:31.637976   13768 logs.go:282] 0 containers: []
	W1205 07:46:31.637976   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:31.641967   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:31.671977   13768 logs.go:282] 0 containers: []
	W1205 07:46:31.671977   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:31.671977   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:31.671977   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:31.710821   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:31.710821   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:31.764988   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:31.764988   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:31.835617   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:31.835617   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:31.942139   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:31.942139   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:31.942139   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:31.990481   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:31.990481   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:32.038782   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:32.038782   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:32.065781   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:32.065781   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:32.103157   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:32.103157   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:34.660859   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:34.686063   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:34.717411   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:34.722246   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:34.760369   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:34.764678   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:34.793606   13768 logs.go:282] 0 containers: []
	W1205 07:46:34.793651   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:34.798192   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:34.828241   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:34.831608   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:34.862308   13768 logs.go:282] 0 containers: []
	W1205 07:46:34.862308   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:34.865297   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:34.897297   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:34.901311   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:34.939818   13768 logs.go:282] 0 containers: []
	W1205 07:46:34.939818   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:34.945259   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:34.982061   13768 logs.go:282] 0 containers: []
	W1205 07:46:34.982061   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:34.982061   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:34.982061   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:35.044049   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:35.044049   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:35.079051   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:35.079051   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:35.128680   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:35.128680   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:35.182675   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:35.182675   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:35.226670   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:35.226670   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:35.254662   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:35.254662   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:35.338858   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:35.338922   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:35.338957   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:35.375917   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:35.375917   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:37.940431   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:37.970223   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:38.005784   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:38.009108   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:38.045298   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:38.049293   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:38.082772   13768 logs.go:282] 0 containers: []
	W1205 07:46:38.082772   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:38.086682   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:38.120651   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:38.123663   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:38.153962   13768 logs.go:282] 0 containers: []
	W1205 07:46:38.153962   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:38.159248   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:38.194514   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:38.197506   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:38.233367   13768 logs.go:282] 0 containers: []
	W1205 07:46:38.233367   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:38.236372   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:38.266498   13768 logs.go:282] 0 containers: []
	W1205 07:46:38.266498   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:38.266498   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:38.266498   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:38.305097   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:38.305097   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:38.392539   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:38.392612   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:38.392612   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:38.443111   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:38.443111   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:38.479697   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:38.479697   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:38.530202   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:38.530202   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.605565   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:38.606571   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:38.663150   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:38.663150   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:38.705148   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:38.705148   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:41.249469   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:41.275585   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:41.305286   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:41.309282   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:41.342352   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:41.346347   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:41.375338   13768 logs.go:282] 0 containers: []
	W1205 07:46:41.375338   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:41.378337   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:41.407919   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:41.411400   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:41.447819   13768 logs.go:282] 0 containers: []
	W1205 07:46:41.447819   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:41.451819   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:41.485146   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:41.489901   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:41.518807   13768 logs.go:282] 0 containers: []
	W1205 07:46:41.518807   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:41.522820   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:41.560724   13768 logs.go:282] 0 containers: []
	W1205 07:46:41.560724   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:41.560724   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:41.560724   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:41.590987   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:41.590987   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:41.680179   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:41.680179   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:41.680179   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:41.739836   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:41.739836   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:41.803037   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:41.803150   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:41.873070   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:41.873070   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:41.912811   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:41.912811   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:41.959467   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:41.959467   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:42.001456   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:42.001456   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:44.542371   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:44.567394   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:44.604937   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:44.609901   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:44.650994   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:44.654992   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:44.690223   13768 logs.go:282] 0 containers: []
	W1205 07:46:44.690223   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:44.694222   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:44.728804   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:44.731800   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:44.761374   13768 logs.go:282] 0 containers: []
	W1205 07:46:44.761374   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:44.764364   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:44.795367   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:44.798364   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:44.827937   13768 logs.go:282] 0 containers: []
	W1205 07:46:44.828009   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:44.831387   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:44.864562   13768 logs.go:282] 0 containers: []
	W1205 07:46:44.864562   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:44.864562   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:44.864562   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.930570   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:44.930570   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:44.966672   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:44.966672   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:45.011105   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:45.011105   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:45.055114   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:45.055114   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:45.153441   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:45.153504   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:45.153504   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:45.197090   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:45.198072   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:45.238838   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:45.238922   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:45.267589   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:45.267589   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:47.828260   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:47.850964   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:47.885859   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:47.889655   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:47.926719   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:47.930289   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:47.963273   13768 logs.go:282] 0 containers: []
	W1205 07:46:47.963273   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:47.967279   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:47.994276   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:47.998273   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:48.030414   13768 logs.go:282] 0 containers: []
	W1205 07:46:48.030624   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:48.036836   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:48.079397   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:48.083854   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:48.110680   13768 logs.go:282] 0 containers: []
	W1205 07:46:48.110680   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:48.118140   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:48.153801   13768 logs.go:282] 0 containers: []
	W1205 07:46:48.153801   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:48.153801   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:48.154804   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:48.242812   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:48.242812   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:48.242812   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:48.291791   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:48.291791   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:48.339797   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:48.339797   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:48.386799   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:48.386799   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:48.416802   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:48.416802   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:48.476805   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:48.476805   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:48.544807   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:48.544807   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:48.588815   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:48.588815   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:51.140613   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:51.164604   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:51.200602   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:51.203595   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:51.234427   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:51.238407   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:51.267403   13768 logs.go:282] 0 containers: []
	W1205 07:46:51.267403   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:51.271412   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:51.302420   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:51.306406   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:51.338601   13768 logs.go:282] 0 containers: []
	W1205 07:46:51.338601   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:51.343414   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:51.378157   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:51.383165   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:51.423776   13768 logs.go:282] 0 containers: []
	W1205 07:46:51.423776   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:51.428265   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:51.459931   13768 logs.go:282] 0 containers: []
	W1205 07:46:51.459931   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:51.459931   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:51.459931   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:51.526343   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:51.526343   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:51.569379   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:51.569379   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:51.672369   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:51.672369   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:51.672369   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:51.721938   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:51.721938   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:51.763132   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:51.763132   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:51.806826   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:51.806826   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:51.844402   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:51.844402   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:51.902820   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:51.902820   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:54.532369   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:54.555233   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:54.603753   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:54.608749   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:54.646780   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:54.651756   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:54.681751   13768 logs.go:282] 0 containers: []
	W1205 07:46:54.681751   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:54.685754   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:54.714762   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:54.717748   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:54.746771   13768 logs.go:282] 0 containers: []
	W1205 07:46:54.746771   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:54.750749   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:54.793110   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:54.796496   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:54.829708   13768 logs.go:282] 0 containers: []
	W1205 07:46:54.829708   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:54.833698   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:54.861634   13768 logs.go:282] 0 containers: []
	W1205 07:46:54.862201   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:54.862230   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:54.862230   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:54.915746   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:54.915746   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:55.015748   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:55.015748   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:55.015748   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:55.068747   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:55.068747   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:55.113751   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:55.113751   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:55.159206   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:55.159206   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:46:55.188193   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:55.188193   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:55.245224   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:55.245224   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:55.310429   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:55.310429   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:57.859047   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:57.879840   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:46:57.911419   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:46:57.915755   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:46:57.949643   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:46:57.954143   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:46:57.984986   13768 logs.go:282] 0 containers: []
	W1205 07:46:57.984986   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:46:57.989571   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:46:58.025342   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:46:58.029725   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:46:58.066826   13768 logs.go:282] 0 containers: []
	W1205 07:46:58.066826   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:58.072505   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:46:58.117387   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:46:58.121454   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:46:58.157045   13768 logs.go:282] 0 containers: []
	W1205 07:46:58.157045   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:58.160046   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:46:58.220830   13768 logs.go:282] 0 containers: []
	W1205 07:46:58.220830   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:58.220830   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:58.220830   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:58.266736   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:58.266736   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:58.379793   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:58.379859   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:46:58.379899   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:46:58.432332   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:46:58.432332   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:46:58.479949   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:46:58.479949   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:46:58.534499   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:46:58.534499   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:58.601524   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:58.601628   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:58.687508   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:46:58.687508   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:46:58.725509   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:46:58.725509   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:47:01.268126   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:01.290193   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:47:01.324176   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:47:01.327661   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:47:01.357010   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:47:01.360433   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:47:01.388036   13768 logs.go:282] 0 containers: []
	W1205 07:47:01.388127   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:47:01.392656   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:47:01.427107   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:47:01.430994   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:47:01.467601   13768 logs.go:282] 0 containers: []
	W1205 07:47:01.467601   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:01.473024   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:47:01.516338   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:47:01.520311   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:47:01.553308   13768 logs.go:282] 0 containers: []
	W1205 07:47:01.553308   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:01.556309   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:47:01.588209   13768 logs.go:282] 0 containers: []
	W1205 07:47:01.588209   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:47:01.588209   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:01.588209   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:01.661850   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:01.661850   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:01.701406   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:47:01.701406   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:47:01.757963   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:47:01.757963   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:47:01.789883   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:47:01.789883   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:01.846373   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:01.846433   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:01.950833   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:01.950833   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:47:01.950833   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:47:02.000406   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:47:02.000406   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:47:02.050354   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:47:02.051321   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:47:04.596633   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:04.620898   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:47:04.654633   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:47:04.661360   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:47:04.703081   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:47:04.707548   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:47:04.735710   13768 logs.go:282] 0 containers: []
	W1205 07:47:04.735710   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:47:04.739688   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:47:04.776582   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:47:04.780254   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:47:04.810855   13768 logs.go:282] 0 containers: []
	W1205 07:47:04.810918   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:04.814788   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:47:04.852042   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:47:04.856481   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:47:04.885387   13768 logs.go:282] 0 containers: []
	W1205 07:47:04.885387   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:04.888834   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:47:04.923002   13768 logs.go:282] 0 containers: []
	W1205 07:47:04.923002   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:47:04.923002   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:04.923002   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:04.986575   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:04.986575   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:05.025077   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:47:05.025077   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:47:05.071841   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:47:05.071841   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:47:05.116333   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:47:05.116333   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:05.171334   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:05.171334   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:05.264123   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:05.264123   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:47:05.264123   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:47:05.308635   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:47:05.308635   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:47:05.347994   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:47:05.347994   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:47:07.881529   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:07.908882   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:47:07.944656   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:47:07.948426   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:47:07.978160   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:47:07.981633   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:47:08.011724   13768 logs.go:282] 0 containers: []
	W1205 07:47:08.011724   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:47:08.017227   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:47:08.051669   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:47:08.055464   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:47:08.085936   13768 logs.go:282] 0 containers: []
	W1205 07:47:08.085936   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:08.089867   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:47:08.120490   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:47:08.126177   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:47:08.158436   13768 logs.go:282] 0 containers: []
	W1205 07:47:08.158436   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:08.162598   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:47:08.193104   13768 logs.go:282] 0 containers: []
	W1205 07:47:08.193197   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:47:08.193197   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:47:08.193245   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:47:08.236865   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:08.236865   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:08.304871   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:08.305879   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:08.344134   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:08.345133   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:08.535748   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:08.535748   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:47:08.535748   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:47:08.572962   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:47:08.572962   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:47:08.600180   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:47:08.600714   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:08.718023   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:47:08.718023   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:47:08.763747   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:47:08.763747   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:47:11.316298   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:11.341178   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:47:11.377461   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:47:11.380465   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:47:11.418969   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:47:11.422970   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:47:11.452113   13768 logs.go:282] 0 containers: []
	W1205 07:47:11.452113   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:47:11.457161   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:47:11.490803   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:47:11.493803   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:47:11.521908   13768 logs.go:282] 0 containers: []
	W1205 07:47:11.521908   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:11.525383   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:47:11.554612   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:47:11.558863   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:47:11.588593   13768 logs.go:282] 0 containers: []
	W1205 07:47:11.588593   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:11.592883   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:47:11.624123   13768 logs.go:282] 0 containers: []
	W1205 07:47:11.624163   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:47:11.624200   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:11.624200   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:11.797877   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:11.797922   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:47:11.797953   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:47:11.855981   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:47:11.855981   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:47:11.910103   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:47:11.910103   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:47:11.960110   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:47:11.960110   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:47:11.989099   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:47:11.989099   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:12.047099   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:12.047099   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:12.130109   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:12.130109   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:12.172100   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:47:12.172100   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:47:14.715643   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:14.832729   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 07:47:14.960588   13768 logs.go:282] 1 containers: [4415be792939]
	I1205 07:47:14.965580   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 07:47:15.102948   13768 logs.go:282] 1 containers: [e22877b7e2b7]
	I1205 07:47:15.111145   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 07:47:15.203043   13768 logs.go:282] 0 containers: []
	W1205 07:47:15.203043   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:47:15.215262   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 07:47:15.323943   13768 logs.go:282] 1 containers: [bb0568ac426f]
	I1205 07:47:15.329005   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 07:47:15.414507   13768 logs.go:282] 0 containers: []
	W1205 07:47:15.414507   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:15.420225   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 07:47:15.490457   13768 logs.go:282] 1 containers: [f3c5649a2875]
	I1205 07:47:15.498462   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 07:47:15.550879   13768 logs.go:282] 0 containers: []
	W1205 07:47:15.550879   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:15.557909   13768 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 07:47:15.618533   13768 logs.go:282] 0 containers: []
	W1205 07:47:15.618533   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:47:15.618533   13768 logs.go:123] Gathering logs for kube-apiserver [4415be792939] ...
	I1205 07:47:15.618533   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4415be792939"
	I1205 07:47:15.702899   13768 logs.go:123] Gathering logs for etcd [e22877b7e2b7] ...
	I1205 07:47:15.702960   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e22877b7e2b7"
	I1205 07:47:15.789769   13768 logs.go:123] Gathering logs for kube-controller-manager [f3c5649a2875] ...
	I1205 07:47:15.789769   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3c5649a2875"
	I1205 07:47:15.863711   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:47:15.863711   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:47:15.911583   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:47:15.911583   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:16.007995   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:16.007995   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:16.125555   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:16.125555   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:16.280139   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:16.280139   13768 logs.go:123] Gathering logs for kube-scheduler [bb0568ac426f] ...
	I1205 07:47:16.280139   13768 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0568ac426f"
	I1205 07:47:16.358793   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:16.359686   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:18.927521   13768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:18.944352   13768 kubeadm.go:602] duration metric: took 4m2.0896178s to restartPrimaryControlPlane
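	The 4-minute restartPrimaryControlPlane window above is bounded by the pgrep probe that runs between the log-gathering rounds; it is the same command each time and can be replayed by hand inside the node:

	    # Exits non-zero while no kube-apiserver process matching the pattern exists,
	    # which is why the rounds of log gathering above keep repeating.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'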
	W1205 07:47:18.944352   13768 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 07:47:18.948346   13768 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 07:47:19.657399   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:47:19.683059   13768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:47:19.696818   13768 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:47:19.701478   13768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:47:19.719159   13768 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:47:19.719159   13768 kubeadm.go:158] found existing configuration files:
	
	I1205 07:47:19.724697   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:47:19.741443   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:47:19.747528   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:47:19.772537   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:47:19.785899   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:47:19.789888   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:47:19.807894   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:47:19.825682   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:47:19.830685   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:47:19.856688   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:47:19.871682   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:47:19.876687   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
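	The grep/rm pairs above implement a simple stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Written out as a loop (equivalent to the per-file commands in the log, not minikube's actual code):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Keep the file only if it references the expected endpoint; here every
	      # file is already missing, so grep exits 2 and the rm is a no-op.
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done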
	I1205 07:47:19.894681   13768 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:47:20.020527   13768 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 07:47:20.115505   13768 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:47:20.246200   13768 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
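	Of the three preflight warnings, Swap and SystemVerification are explicitly listed in the --ignore-preflight-errors set of the init command above; the Service-kubelet warning is advisory and names its own fix:

	    # Suggested by the [WARNING Service-kubelet] line; makes kubelet start on boot.
	    sudo systemctl enable kubelet.service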
	I1205 07:51:21.461576   13768 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:51:21.461576   13768 kubeadm.go:319] 
	I1205 07:51:21.462649   13768 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:51:21.465812   13768 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:51:21.466044   13768 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:51:21.466044   13768 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:51:21.466044   13768 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:51:21.466044   13768 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:51:21.466044   13768 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:51:21.466652   13768 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:51:21.466793   13768 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:51:21.466933   13768 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:51:21.467129   13768 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:51:21.467129   13768 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:51:21.467129   13768 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:51:21.467129   13768 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:51:21.467129   13768 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:51:21.467767   13768 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:51:21.468014   13768 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:51:21.468111   13768 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:51:21.468227   13768 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:51:21.468358   13768 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:51:21.468539   13768 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:51:21.468676   13768 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:51:21.468797   13768 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:51:21.469070   13768 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:51:21.469350   13768 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:51:21.469597   13768 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:51:21.469954   13768 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:51:21.470061   13768 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:51:21.470314   13768 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:51:21.470490   13768 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:51:21.470532   13768 kubeadm.go:319] OS: Linux
	I1205 07:51:21.470705   13768 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:51:21.470705   13768 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:51:21.470705   13768 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:51:21.470705   13768 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:51:21.470705   13768 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:51:21.471249   13768 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:51:21.471455   13768 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:51:21.471713   13768 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:51:21.471787   13768 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:51:21.471787   13768 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:51:21.471787   13768 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:51:21.472313   13768 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:51:21.472625   13768 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:51:21.475097   13768 out.go:252]   - Generating certificates and keys ...
	I1205 07:51:21.475097   13768 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:51:21.475097   13768 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:51:21.475097   13768 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:51:21.475097   13768 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:51:21.476054   13768 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:51:21.476054   13768 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:51:21.476054   13768 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:51:21.476054   13768 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:51:21.476706   13768 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:51:21.476837   13768 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:51:21.476837   13768 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:51:21.476837   13768 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:51:21.476837   13768 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:51:21.476837   13768 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:51:21.477373   13768 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:51:21.477555   13768 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:51:21.477555   13768 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:51:21.477790   13768 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:51:21.477909   13768 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:51:21.479113   13768 out.go:252]   - Booting up control plane ...
	I1205 07:51:21.480117   13768 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:51:21.480117   13768 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:51:21.480117   13768 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:51:21.480117   13768 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:51:21.480966   13768 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:51:21.481145   13768 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:51:21.481275   13768 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:51:21.481413   13768 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:51:21.481471   13768 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:51:21.481471   13768 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:51:21.482106   13768 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001108507s
	I1205 07:51:21.482148   13768 kubeadm.go:319] 
	I1205 07:51:21.482330   13768 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:51:21.482464   13768 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:51:21.482742   13768 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:51:21.482821   13768 kubeadm.go:319] 
	I1205 07:51:21.483063   13768 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:51:21.483274   13768 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:51:21.483347   13768 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:51:21.483394   13768 kubeadm.go:319] 
	W1205 07:51:21.483599   13768 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001108507s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
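	The failure itself is the kubelet never answering its local health endpoint within kubeadm's 4m0s budget. The log's own suggestions, plus the probe kubeadm polls, can be run directly on the node:

	    # The exact probe kubeadm retries for up to 4 minutes:
	    curl -sSL http://127.0.0.1:10248/healthz
	    # The two troubleshooting commands the error message recommends:
	    systemctl status kubelet
	    journalctl -xeu kubelet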
	I1205 07:51:21.488031   13768 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 07:51:21.955722   13768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:51:21.975249   13768 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:51:21.980368   13768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:51:21.993507   13768 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:51:21.993507   13768 kubeadm.go:158] found existing configuration files:
	
	I1205 07:51:21.996506   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:51:22.012437   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:51:22.017745   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:51:22.035575   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:51:22.049667   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:51:22.053633   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:51:22.070167   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:51:22.083006   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:51:22.087233   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:51:22.102556   13768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:51:22.118271   13768 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:51:22.122692   13768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:51:22.141517   13768 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:51:22.258337   13768 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 07:51:22.341144   13768 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:51:22.442325   13768 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:55:23.045593   13768 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:55:23.045652   13768 kubeadm.go:319] 
	I1205 07:55:23.045992   13768 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:55:23.051845   13768 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:55:23.051845   13768 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:55:23.052506   13768 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:55:23.052692   13768 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:55:23.052854   13768 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:55:23.053029   13768 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:55:23.053160   13768 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:55:23.053328   13768 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:55:23.053484   13768 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:55:23.053685   13768 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:55:23.053781   13768 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:55:23.053906   13768 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:55:23.054048   13768 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:55:23.054244   13768 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:55:23.054403   13768 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:55:23.054504   13768 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:55:23.054667   13768 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:55:23.054801   13768 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:55:23.054856   13768 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:55:23.055042   13768 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:55:23.055090   13768 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:55:23.055090   13768 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:55:23.055090   13768 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:55:23.055090   13768 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:55:23.055090   13768 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:55:23.055619   13768 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:55:23.055741   13768 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:55:23.055782   13768 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:55:23.056077   13768 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:55:23.056256   13768 kubeadm.go:319] OS: Linux
	I1205 07:55:23.056374   13768 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:55:23.056515   13768 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:55:23.056707   13768 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:55:23.056862   13768 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:55:23.057105   13768 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:55:23.057333   13768 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:55:23.057453   13768 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:55:23.057504   13768 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:55:23.057504   13768 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:55:23.057504   13768 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:55:23.057504   13768 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:55:23.058089   13768 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:55:23.058089   13768 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:55:23.061076   13768 out.go:252]   - Generating certificates and keys ...
	I1205 07:55:23.061076   13768 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:55:23.062013   13768 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:55:23.063056   13768 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:55:23.063056   13768 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:55:23.063056   13768 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:55:23.063056   13768 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:55:23.063056   13768 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:55:23.063056   13768 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:55:23.063626   13768 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:55:23.063626   13768 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:55:23.066473   13768 out.go:252]   - Booting up control plane ...
	I1205 07:55:23.066473   13768 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:55:23.066473   13768 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:55:23.066473   13768 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:55:23.066473   13768 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:55:23.066473   13768 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:55:23.067476   13768 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:55:23.067476   13768 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:55:23.067476   13768 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:55:23.067476   13768 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:55:23.067476   13768 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:55:23.067476   13768 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000125968s
	I1205 07:55:23.067476   13768 kubeadm.go:319] 
	I1205 07:55:23.068475   13768 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:55:23.068475   13768 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:55:23.068475   13768 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:55:23.068475   13768 kubeadm.go:319] 
	I1205 07:55:23.068475   13768 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:55:23.068475   13768 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:55:23.068475   13768 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:55:23.068475   13768 kubeadm.go:319] 
	I1205 07:55:23.068475   13768 kubeadm.go:403] duration metric: took 12m6.261214s to StartCluster
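	After the second init failure, the container survey switches from the docker ps name filters used earlier to crictl queries (cri.go); the equivalent one-off check for any component is:

	    # --quiet prints only container IDs; an empty result corresponds to the
	    # found id: "" entries below.
	    sudo crictl ps -a --quiet --name=kube-apiserver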
	I1205 07:55:23.068475   13768 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:55:23.072573   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:55:23.139073   13768 cri.go:89] found id: ""
	I1205 07:55:23.139139   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.139139   13768 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:55:23.139168   13768 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:55:23.143489   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:55:23.709219   13768 cri.go:89] found id: ""
	I1205 07:55:23.709219   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.709219   13768 logs.go:284] No container was found matching "etcd"
	I1205 07:55:23.709219   13768 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:55:23.714297   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:55:23.758890   13768 cri.go:89] found id: ""
	I1205 07:55:23.758890   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.758890   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:55:23.758890   13768 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:55:23.763536   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:55:23.816977   13768 cri.go:89] found id: ""
	I1205 07:55:23.816977   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.816977   13768 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:55:23.816977   13768 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:55:23.822697   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:55:23.875264   13768 cri.go:89] found id: ""
	I1205 07:55:23.875264   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.875264   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:55:23.875264   13768 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:55:23.879252   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:55:23.930522   13768 cri.go:89] found id: ""
	I1205 07:55:23.930522   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.930522   13768 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:55:23.930522   13768 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:55:23.935436   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:55:23.978429   13768 cri.go:89] found id: ""
	I1205 07:55:23.978497   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.978497   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:55:23.978497   13768 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:55:23.983310   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:55:24.038016   13768 cri.go:89] found id: ""
	I1205 07:55:24.038016   13768 logs.go:282] 0 containers: []
	W1205 07:55:24.038016   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:55:24.038016   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:55:24.038016   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:55:24.106470   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:55:24.106470   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:55:24.175433   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:55:24.175433   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:55:24.212237   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:55:24.212237   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:55:24.310545   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:55:24.310545   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:55:24.310545   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1205 07:55:24.342555   13768 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000125968s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:55:24.342555   13768 out.go:285] * 
	W1205 07:55:24.342555   13768 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the first block above ...]
	
	
	W1205 07:55:24.343564   13768 out.go:285] * 
	W1205 07:55:24.347550   13768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:55:24.358643   13768 out.go:203] 
	W1205 07:55:24.363464   13768 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the first block above ...]
	
	
	W1205 07:55:24.363464   13768 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:55:24.363464   13768 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:55:24.367464   13768 out.go:203] 
** /stderr **
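The failure above is the kubelet never answering its healthz probe, and minikube's own suggestion (quoted in the log) is to check 'journalctl -xeu kubelet' and retry with the kubelet cgroup driver pinned to systemd; the cgroups-v1 warning separately names the kubelet configuration option 'FailCgroupV1'. A sketch of acting on those hints against this profile (profile name taken from the test args; the tail trimming is an assumption, and curl availability inside the node is inferred from the kubeadm message that quotes it):

  # Look at the kubelet's own logs inside the minikube node:
  out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-863300 "sudo journalctl -xeu kubelet | tail -n 50"
  # Probe the healthz endpoint the kubeadm wait loop was polling:
  out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-863300 "curl -sSL http://127.0.0.1:10248/healthz"
  # Retry the start with the cgroup driver the suggestion names:
  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-863300 --extra-config=kubelet.cgroup-driver=systemd
  # Capture full logs for a bug report, as the boxed advice above suggests:
  out/minikube-windows-amd64.exe logs --file=logs.txt -p kubernetes-upgrade-863300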
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-863300 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-863300 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-863300 version --output=json: exit status 1 (10.1596649s)
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "34",
	    "gitVersion": "v1.34.2",
	    "gitCommit": "8cc511e399b929453cd98ae65b419c3cc227ec79",
	    "gitTreeState": "clean",
	    "buildDate": "2025-11-11T19:10:16Z",
	    "goVersion": "go1.24.9",
	    "compiler": "gc",
	    "platform": "windows/amd64"
	  },
	  "kustomizeVersion": "v5.7.1"
	}
-- /stdout --
** stderr ** 
	Unable to connect to the server: EOF
** /stderr **
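Note the JSON above carries only clientVersion: kubectl printed the client block before the server round-trip failed with EOF. To tell a broken client from an unreachable API server, two standard kubectl calls suffice; a sketch (context name taken from the test, /readyz being the apiserver's standard readiness endpoint):

  # Succeeds with no cluster at all, so it isolates the client side:
  kubectl version --client --output=json
  # Hits the API server directly; a refused/EOF here confirms the server side is down:
  kubectl --context kubernetes-upgrade-863300 get --raw /readyz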
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-05 07:55:35.7751866 +0000 UTC m=+6615.191837901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-863300
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-863300:
-- stdout --
	[
	    {
	        "Id": "76308136c0c8abd86ae8b7fe344868857ee6ff49e2ab51111b42a5d8b15513ba",
	        "Created": "2025-12-05T07:41:50.116677828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:42:36.720800788Z",
	            "FinishedAt": "2025-12-05T07:42:32.144641101Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/76308136c0c8abd86ae8b7fe344868857ee6ff49e2ab51111b42a5d8b15513ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76308136c0c8abd86ae8b7fe344868857ee6ff49e2ab51111b42a5d8b15513ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/76308136c0c8abd86ae8b7fe344868857ee6ff49e2ab51111b42a5d8b15513ba/hosts",
	        "LogPath": "/var/lib/docker/containers/76308136c0c8abd86ae8b7fe344868857ee6ff49e2ab51111b42a5d8b15513ba/76308136c0c8abd86ae8b7fe344868857ee6ff49e2ab51111b42a5d8b15513ba-json.log",
	        "Name": "/kubernetes-upgrade-863300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-863300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-863300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/117c22ab6f8ab767cf55b3b7095632c6c562814ecb6969a8961a729a44ed41cc-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/117c22ab6f8ab767cf55b3b7095632c6c562814ecb6969a8961a729a44ed41cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/117c22ab6f8ab767cf55b3b7095632c6c562814ecb6969a8961a729a44ed41cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/117c22ab6f8ab767cf55b3b7095632c6c562814ecb6969a8961a729a44ed41cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-863300",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-863300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-863300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-863300",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-863300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8738bf56e7dbbf0295dfa0e25a9e8f645394067e1406d91d6f876fc20ed7da5",
	            "SandboxKey": "/var/run/docker/netns/a8738bf56e7d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60021"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60022"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60023"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60024"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60025"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-863300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "098ba74775639122205dbef0132fec0f64ec37f3094b30a9180a7a15f56f7d18",
	                    "EndpointID": "ede69a1899d53efe7c3b7785bd140e657203d1d7a49f8d507a15a866f9720edb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-863300",
	                        "76308136c0c8"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
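The inspect output shows the container itself is healthy at the Docker level: State.Status is "running" and 8443/tcp is published on 127.0.0.1:60025, so the breakage is inside the node. For spot checks, docker inspect's -f Go-template flag pulls individual fields instead of the full document; for example:

  # Container state and init PID:
  docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' kubernetes-upgrade-863300
  # Host binding for the API server port:
  docker inspect -f '{{index (index .NetworkSettings.Ports "8443/tcp") 0}}' kubernetes-upgrade-863300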
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-863300 -n kubernetes-upgrade-863300
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-863300 -n kubernetes-upgrade-863300: exit status 2 (606.2332ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
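Exit status 2 with Host reporting Running matches a machine that is up while the Kubernetes components are not. The same --format Go template the harness uses for {{.Host}} can read the remaining status fields in one call; a sketch (field names as exposed by minikube's status template):

  out/minikube-windows-amd64.exe status -p kubernetes-upgrade-863300 --format "{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}"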
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-863300 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-863300 logs -n 25: (2.9621879s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-218000 sudo systemctl status kubelet --all --full --no-pager                                           │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat kubelet --no-pager                                                           │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/kubernetes/kubelet.conf                                                           │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /var/lib/kubelet/config.yaml                                                           │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status docker --all --full --no-pager                                            │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat docker --no-pager                                                            │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/docker/daemon.json                                                                │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo docker system info                                                                         │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status cri-docker --all --full --no-pager                                        │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat cri-docker --no-pager                                                        │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cri-dockerd --version                                                                      │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status containerd --all --full --no-pager                                        │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat containerd --no-pager                                                        │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/containerd/config.toml                                                            │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo containerd config dump                                                                     │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status crio --all --full --no-pager                                              │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	│ ssh     │ -p auto-218000 sudo systemctl cat crio --no-pager                                                              │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo crio config                                                                                │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ delete  │ -p auto-218000                                                                                                 │ auto-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ start   │ -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:55:24
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:55:24.061353    3768 out.go:360] Setting OutFile to fd 964 ...
	I1205 07:55:24.114455    3768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:24.114455    3768 out.go:374] Setting ErrFile to fd 1312...
	I1205 07:55:24.114455    3768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:24.129464    3768 out.go:368] Setting JSON to false
	I1205 07:55:24.133433    3768 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12581,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:55:24.133433    3768 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:55:24.154426    3768 out.go:179] * [kindnet-218000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:55:24.157428    3768 notify.go:221] Checking for updates...
	I1205 07:55:24.159435    3768 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:55:24.162432    3768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:55:24.165426    3768 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:55:24.167427    3768 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:55:24.170434    3768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:55:23.709219   13768 cri.go:89] found id: ""
	I1205 07:55:23.709219   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.709219   13768 logs.go:284] No container was found matching "etcd"
	I1205 07:55:23.709219   13768 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:55:23.714297   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:55:23.758890   13768 cri.go:89] found id: ""
	I1205 07:55:23.758890   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.758890   13768 logs.go:284] No container was found matching "coredns"
	I1205 07:55:23.758890   13768 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:55:23.763536   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:55:23.816977   13768 cri.go:89] found id: ""
	I1205 07:55:23.816977   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.816977   13768 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:55:23.816977   13768 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:55:23.822697   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:55:23.875264   13768 cri.go:89] found id: ""
	I1205 07:55:23.875264   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.875264   13768 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:55:23.875264   13768 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:55:23.879252   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:55:23.930522   13768 cri.go:89] found id: ""
	I1205 07:55:23.930522   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.930522   13768 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:55:23.930522   13768 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:55:23.935436   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:55:23.978429   13768 cri.go:89] found id: ""
	I1205 07:55:23.978497   13768 logs.go:282] 0 containers: []
	W1205 07:55:23.978497   13768 logs.go:284] No container was found matching "kindnet"
	I1205 07:55:23.978497   13768 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:55:23.983310   13768 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:55:24.038016   13768 cri.go:89] found id: ""
	I1205 07:55:24.038016   13768 logs.go:282] 0 containers: []
	W1205 07:55:24.038016   13768 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:55:24.038016   13768 logs.go:123] Gathering logs for container status ...
	I1205 07:55:24.038016   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:55:24.106470   13768 logs.go:123] Gathering logs for kubelet ...
	I1205 07:55:24.106470   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:55:24.175433   13768 logs.go:123] Gathering logs for dmesg ...
	I1205 07:55:24.175433   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:55:24.212237   13768 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:55:24.212237   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:55:24.310545   13768 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:55:24.310545   13768 logs.go:123] Gathering logs for Docker ...
	I1205 07:55:24.310545   13768 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1205 07:55:24.342555   13768 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the earlier blocks ...]
	W1205 07:55:24.342555   13768 out.go:285] * 
	W1205 07:55:24.342555   13768 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1205 07:55:24.343564   13768 out.go:285] * 
	W1205 07:55:24.347550   13768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:55:24.358643   13768 out.go:203] 
	W1205 07:55:24.363464   13768 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1205 07:55:24.363464   13768 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:55:24.363464   13768 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:55:24.367464   13768 out.go:203] 
	I1205 07:55:24.173427    3768 config.go:182] Loaded profile config "kubernetes-upgrade-863300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:24.173427    3768 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:24.173427    3768 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:24.173427    3768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:55:24.288541    3768 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:55:24.292553    3768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:24.540459    3768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:24.515545435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:24.543471    3768 out.go:179] * Using the docker driver based on user configuration
	I1205 07:55:24.546466    3768 start.go:309] selected driver: docker
	I1205 07:55:24.546466    3768 start.go:927] validating driver "docker" against <nil>
	I1205 07:55:24.546466    3768 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:55:24.647968    3768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:24.921272    3768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:24.900872209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:24.922257    3768 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:55:24.923251    3768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:55:24.926259    3768 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 07:55:24.928251    3768 cni.go:84] Creating CNI manager for "kindnet"
	I1205 07:55:24.928251    3768 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:55:24.928251    3768 start.go:353] cluster config:
	{Name:kindnet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:55:24.930252    3768 out.go:179] * Starting "kindnet-218000" primary control-plane node in "kindnet-218000" cluster
	I1205 07:55:24.933252    3768 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:55:24.937251    3768 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:55:24.939253    3768 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:24.939253    3768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:55:24.939253    3768 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1205 07:55:24.939253    3768 cache.go:65] Caching tarball of preloaded images
	I1205 07:55:24.940254    3768 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1205 07:55:24.940254    3768 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1205 07:55:24.940254    3768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\config.json ...
	I1205 07:55:24.940254    3768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\config.json: {Name:mk8c6eedb1805faf7f8e3f3c750ff310bca8b0a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:55:25.037685    3768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:55:25.038211    3768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:55:25.038278    3768 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:55:25.038381    3768 start.go:360] acquireMachinesLock for kindnet-218000: {Name:mkce281009023e090962188bc02766acad29cf7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:55:25.038381    3768 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-218000"
	I1205 07:55:25.038381    3768 start.go:93] Provisioning new machine with config: &{Name:kindnet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-218000 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:55:25.038381    3768 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:55:25.043261    3768 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:55:25.043455    3768 start.go:159] libmachine.API.Create for "kindnet-218000" (driver="docker")
	I1205 07:55:25.043455    3768 client.go:173] LocalClient.Create starting
	I1205 07:55:25.044037    3768 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 07:55:25.044037    3768 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:25.044037    3768 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:25.044037    3768 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 07:55:25.044037    3768 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:25.044566    3768 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:25.050217    3768 cli_runner.go:164] Run: docker network inspect kindnet-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:55:25.112781    3768 cli_runner.go:211] docker network inspect kindnet-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:55:25.115790    3768 network_create.go:284] running [docker network inspect kindnet-218000] to gather additional debugging logs...
	I1205 07:55:25.115790    3768 cli_runner.go:164] Run: docker network inspect kindnet-218000
	W1205 07:55:25.166781    3768 cli_runner.go:211] docker network inspect kindnet-218000 returned with exit code 1
	I1205 07:55:25.166781    3768 network_create.go:287] error running [docker network inspect kindnet-218000]: docker network inspect kindnet-218000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-218000 not found
	I1205 07:55:25.166781    3768 network_create.go:289] output of [docker network inspect kindnet-218000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-218000 not found
	
	** /stderr **
	I1205 07:55:25.169778    3768 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:55:25.251521    3768 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:25.282526    3768 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:25.297780    3768 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:25.313209    3768 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:25.326914    3768 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001920240}
	I1205 07:55:25.326914    3768 network_create.go:124] attempt to create docker network kindnet-218000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:55:25.332393    3768 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-218000 kindnet-218000
	W1205 07:55:25.396388    3768 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-218000 kindnet-218000 returned with exit code 1
	W1205 07:55:25.397109    3768 network_create.go:149] failed to create docker network kindnet-218000 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-218000 kindnet-218000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1205 07:55:25.397129    3768 network_create.go:116] failed to create docker network kindnet-218000 192.168.85.0/24, will retry: subnet is taken
	I1205 07:55:25.421078    3768 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:25.437026    3768 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017f9bc0}
	I1205 07:55:25.437026    3768 network_create.go:124] attempt to create docker network kindnet-218000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1205 07:55:25.440924    3768 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-218000 kindnet-218000
	I1205 07:55:25.625843    3768 network_create.go:108] docker network kindnet-218000 192.168.94.0/24 created
	I1205 07:55:25.625843    3768 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-218000" container
	I1205 07:55:25.636833    3768 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:55:25.710579    3768 cli_runner.go:164] Run: docker volume create kindnet-218000 --label name.minikube.sigs.k8s.io=kindnet-218000 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:55:25.764578    3768 oci.go:103] Successfully created a docker volume kindnet-218000
	I1205 07:55:25.768578    3768 cli_runner.go:164] Run: docker run --rm --name kindnet-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --entrypoint /usr/bin/test -v kindnet-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:55:27.165828    3768 cli_runner.go:217] Completed: docker run --rm --name kindnet-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --entrypoint /usr/bin/test -v kindnet-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.397228s)
	I1205 07:55:27.165828    3768 oci.go:107] Successfully prepared a docker volume kindnet-218000
	I1205 07:55:27.166373    3768 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:27.166373    3768 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:55:27.170250    3768 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> Docker <==
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.623481092Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.623603505Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.623624408Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.623634609Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.623648210Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.623685915Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.623732020Z" level=info msg="Initializing buildkit"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.780989275Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.792068219Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.792243539Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.792243939Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:42:51 kubernetes-upgrade-863300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:42:51 kubernetes-upgrade-863300 dockerd[928]: time="2025-12-05T07:42:51.792259441Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:42:52 kubernetes-upgrade-863300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:42:52 kubernetes-upgrade-863300 cri-dockerd[1226]: time="2025-12-05T07:42:52Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:42:52 kubernetes-upgrade-863300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +10.137467] CPU: 7 PID: 388349 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f0fc1fe7b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f0fc1fe7af6.
	[  +0.000001] RSP: 002b:00007ffe3c6595e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.844900] CPU: 9 PID: 388522 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f52a1d95b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f52a1d95af6.
	[  +0.000000] RSP: 002b:00007ffee986e010 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +9.275386] tmpfs: Unknown parameter 'noswap'
	[Dec 5 07:54] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:55:39 up  3:29,  0 user,  load average: 2.80, 3.90, 3.67
	Linux kubernetes-upgrade-863300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:55:36 kubernetes-upgrade-863300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:36 kubernetes-upgrade-863300 kubelet[26099]: E1205 07:55:36.388182   26099 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:36 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:36 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 338.
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:37 kubernetes-upgrade-863300 kubelet[26119]: E1205 07:55:37.117065   26119 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 339.
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:37 kubernetes-upgrade-863300 kubelet[26166]: E1205 07:55:37.887992   26166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:37 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:38 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 340.
	Dec 05 07:55:38 kubernetes-upgrade-863300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:38 kubernetes-upgrade-863300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:38 kubernetes-upgrade-863300 kubelet[26240]: E1205 07:55:38.617424   26240 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:38 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:38 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:39 kubernetes-upgrade-863300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 341.
	Dec 05 07:55:39 kubernetes-upgrade-863300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:39 kubernetes-upgrade-863300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-863300 -n kubernetes-upgrade-863300
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-863300 -n kubernetes-upgrade-863300: exit status 2 (600.6781ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-863300" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-863300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-863300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-863300: (3.2074926s)
--- FAIL: TestKubernetesUpgrade (850.32s)
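The root cause of this failure is visible in the kubelet journal above: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", and the WSL2 host here is on cgroup v1 (the SystemVerification preflight warning says the same). The suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd) changes the cgroup driver, not the cgroup version, so it would not clear this validation. Below is a minimal sketch of the opt-out the warning names, assuming the kubelet config path shown in the kubeadm output (/var/lib/kubelet/config.yaml), that the YAML casing of the 'FailCgroupV1' option is failCgroupV1, and that appending a top-level key to that file is acceptable; moving the host to cgroup v2 avoids the override entirely.

	# Hedged sketch, not what this run did: opt kubelet v1.35+ back into
	# cgroup v1 via the field named in the SystemVerification warning,
	# then restart kubelet and re-check its journal.
	cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet
	journalctl -xeu kubelet --no-pager | tail -n 20

Note that the kubeadm invocation above already skips the SystemVerification preflight check via --ignore-preflight-errors, so only the kubelet-side validation needs the override.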
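An unrelated detail from the interleaved kindnet-218000 log above: minikube skipped the reserved subnets 192.168.49/58/67/76.0/24, tried to create its network on 192.168.85.0/24, hit "Pool overlaps with other one on this address space", and succeeded on the retry with 192.168.94.0/24. A sketch for finding which existing network held the overlapping pool, using only docker CLI calls already exercised in this log:

	# List every Docker network with its IPAM subnet(s); the network
	# whose pool overlaps 192.168.85.0/24 would show up here.
	for n in $(docker network ls -q); do
		docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$n"
	done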

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (528.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-104100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
E1205 07:47:23.964067    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:47:42.897762    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-104100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m45.7860854s)

                                                
                                                
-- stdout --
	* [no-preload-104100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "no-preload-104100" primary control-plane node in "no-preload-104100" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:47:11.980106    3504 out.go:360] Setting OutFile to fd 1212 ...
	I1205 07:47:12.037102    3504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:47:12.037102    3504 out.go:374] Setting ErrFile to fd 1084...
	I1205 07:47:12.037102    3504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:47:12.055100    3504 out.go:368] Setting JSON to false
	I1205 07:47:12.058108    3504 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12089,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:47:12.058108    3504 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:47:12.062102    3504 out.go:179] * [no-preload-104100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:47:12.067104    3504 notify.go:221] Checking for updates...
	I1205 07:47:12.070111    3504 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:47:12.073098    3504 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:47:12.075105    3504 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:47:12.078101    3504 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:47:12.081103    3504 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:47:12.085102    3504 config.go:182] Loaded profile config "cert-expiration-463600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:47:12.086101    3504 config.go:182] Loaded profile config "kubernetes-upgrade-863300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:47:12.086101    3504 config.go:182] Loaded profile config "old-k8s-version-648900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1205 07:47:12.086101    3504 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:47:12.209101    3504 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:47:12.213099    3504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:47:12.478276    3504 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:106 SystemTime:2025-12-05 07:47:12.459203434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:47:12.486256    3504 out.go:179] * Using the docker driver based on user configuration
	I1205 07:47:12.489262    3504 start.go:309] selected driver: docker
	I1205 07:47:12.489262    3504 start.go:927] validating driver "docker" against <nil>
	I1205 07:47:12.489262    3504 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:47:12.534484    3504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:47:12.798184    3504 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:47:12.780901866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:47:12.798184    3504 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:47:12.799185    3504 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:47:12.923117    3504 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 07:47:12.941923    3504 cni.go:84] Creating CNI manager for ""
	I1205 07:47:12.942188    3504 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 07:47:12.942188    3504 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 07:47:12.942343    3504 start.go:353] cluster config:
	{Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:47:12.987356    3504 out.go:179] * Starting "no-preload-104100" primary control-plane node in "no-preload-104100" cluster
	I1205 07:47:12.992672    3504 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:47:13.002167    3504 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:47:13.008256    3504 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:47:13.008521    3504 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:47:13.008735    3504 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\config.json ...
	I1205 07:47:13.008735    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:47:13.008735    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:47:13.008837    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:47:13.008953    3504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\config.json: {Name:mkb1c77ddefcad4495e8744b0a0cdc2c3778f879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:13.008905    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:47:13.009020    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:47:13.009020    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:47:13.009020    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:47:13.009020    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:47:13.401509    3504 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:47:13.401509    3504 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:47:13.401509    3504 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:47:13.401509    3504 start.go:360] acquireMachinesLock for no-preload-104100: {Name:mk6569d967c60dcd29e05d158ce4a7a18e59aa2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:13.401509    3504 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-104100"
	I1205 07:47:13.401509    3504 start.go:93] Provisioning new machine with config: &{Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:47:13.402497    3504 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:47:13.409508    3504 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:47:13.409508    3504 start.go:159] libmachine.API.Create for "no-preload-104100" (driver="docker")
	I1205 07:47:13.409508    3504 client.go:173] LocalClient.Create starting
	I1205 07:47:13.410499    3504 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 07:47:13.410499    3504 main.go:143] libmachine: Decoding PEM data...
	I1205 07:47:13.410499    3504 main.go:143] libmachine: Parsing certificate...
	I1205 07:47:13.410499    3504 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 07:47:13.410499    3504 main.go:143] libmachine: Decoding PEM data...
	I1205 07:47:13.410499    3504 main.go:143] libmachine: Parsing certificate...
	I1205 07:47:13.417521    3504 cli_runner.go:164] Run: docker network inspect no-preload-104100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:47:13.527820    3504 cli_runner.go:211] docker network inspect no-preload-104100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:47:13.536596    3504 network_create.go:284] running [docker network inspect no-preload-104100] to gather additional debugging logs...
	I1205 07:47:13.536671    3504 cli_runner.go:164] Run: docker network inspect no-preload-104100
	W1205 07:47:13.927867    3504 cli_runner.go:211] docker network inspect no-preload-104100 returned with exit code 1
	I1205 07:47:13.927867    3504 network_create.go:287] error running [docker network inspect no-preload-104100]: docker network inspect no-preload-104100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-104100 not found
	I1205 07:47:13.927867    3504 network_create.go:289] output of [docker network inspect no-preload-104100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-104100 not found
	
	** /stderr **
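
The failed inspect above is deliberate: minikube probes with docker network inspect first and treats a non-zero exit ("network ... not found") as the signal to create the network. A minimal Go sketch of that exit-code check, assuming only that the docker CLI is on PATH (networkExists is a hypothetical helper, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
)

// networkExists shells out to the Docker CLI; `docker network inspect`
// exits non-zero when the named network does not exist.
func networkExists(name string) bool {
	return exec.Command("docker", "network", "inspect", name).Run() == nil
}

func main() {
	if !networkExists("no-preload-104100") {
		fmt.Println("network not found, would create it")
	}
}
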
	I1205 07:47:13.933867    3504 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:47:15.107252    3504 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:47:15.219713    3504 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:47:15.303980    3504 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:47:15.340250    3504 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:47:15.390188    3504 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:47:15.424224    3504 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:47:15.452454    3504 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9be00}
	I1205 07:47:15.453168    3504 network_create.go:124] attempt to create docker network no-preload-104100 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1205 07:47:15.460950    3504 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-104100 no-preload-104100
	I1205 07:47:15.736630    3504 network_create.go:108] docker network no-preload-104100 192.168.103.0/24 created
	I1205 07:47:15.736630    3504 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-104100" container
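
The skip/use lines above trace the subnet walk: candidates start at 192.168.49.0/24 and the third octet advances in steps of 9 (49, 58, 67, ...) until an unreserved /24 turns up, here 192.168.103.0/24. A minimal sketch of that walk; the start value and step are inferred from this log, and the reserved set stands in for inspecting existing Docker networks:

package main

import "fmt"

// freeSubnet returns the first 192.168.x.0/24 candidate not already reserved,
// stepping the third octet by 9 as the skip messages above show.
func freeSubnet(reserved map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		if subnet := fmt.Sprintf("192.168.%d.0/24", octet); !reserved[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(freeSubnet(reserved)) // prints 192.168.103.0/24, matching the log
}
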
	I1205 07:47:15.767770    3504 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:47:15.873058    3504 cli_runner.go:164] Run: docker volume create no-preload-104100 --label name.minikube.sigs.k8s.io=no-preload-104100 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:47:15.983782    3504 oci.go:103] Successfully created a docker volume no-preload-104100
	I1205 07:47:15.989782    3504 cli_runner.go:164] Run: docker run --rm --name no-preload-104100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-104100 --entrypoint /usr/bin/test -v no-preload-104100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:47:16.484963    3504 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.485966    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 07:47:16.485966    3504 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.4768909s
	I1205 07:47:16.485966    3504 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 07:47:16.485966    3504 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.486981    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:47:16.486981    3504 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.4779059s
	I1205 07:47:16.486981    3504 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:47:16.488979    3504 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.488979    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:47:16.488979    3504 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.4801888s
	I1205 07:47:16.488979    3504 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:47:16.494566    3504 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.495575    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 07:47:16.495575    3504 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.4864998s
	I1205 07:47:16.495575    3504 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 07:47:16.551563    3504 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.551563    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 07:47:16.551563    3504 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.5424871s
	I1205 07:47:16.551563    3504 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:47:16.601451    3504 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.601980    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 07:47:16.602113    3504 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.5930356s
	I1205 07:47:16.602113    3504 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 07:47:16.646349    3504 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.647355    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 07:47:16.647355    3504 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.6385623s
	I1205 07:47:16.647355    3504 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 07:47:16.830373    3504 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:47:16.831367    3504 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:47:16.831367    3504 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.8224018s
	I1205 07:47:16.831367    3504 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:47:16.831367    3504 cache.go:87] Successfully saved all images to host disk.
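
Each cache.go block above follows the same shape: acquire a per-image lock, test whether the sanitized tar path already exists on disk, and count an existing file as a completed save. A minimal sketch of just the hit/miss decision, assuming plain file existence as the cache test (the per-image locking is omitted here):

package main

import (
	"fmt"
	"os"
)

// ensureCached skips the fetch when the tar file is already on disk,
// mirroring the "exists ... succeeded" lines in the log.
func ensureCached(tarPath string, fetch func(string) error) error {
	if _, err := os.Stat(tarPath); err == nil {
		fmt.Printf("cache hit, skipping: %s\n", tarPath)
		return nil
	}
	return fetch(tarPath)
}

func main() {
	fetch := func(p string) error { fmt.Println("would download", p); return nil }
	_ = ensureCached("pause_3.10.1", fetch)
}
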
	I1205 07:47:17.544899    3504 cli_runner.go:217] Completed: docker run --rm --name no-preload-104100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-104100 --entrypoint /usr/bin/test -v no-preload-104100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.5550914s)
	I1205 07:47:17.544899    3504 oci.go:107] Successfully prepared a docker volume no-preload-104100
	I1205 07:47:17.544899    3504 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:47:17.550153    3504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:47:17.786397    3504 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:47:17.763707945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:47:17.790387    3504 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:47:18.037341    3504 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-104100 --name no-preload-104100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-104100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-104100 --network no-preload-104100 --ip 192.168.103.2 --volume no-preload-104100:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:47:18.774295    3504 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Running}}
	I1205 07:47:18.850624    3504 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:47:18.911904    3504 cli_runner.go:164] Run: docker exec no-preload-104100 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:47:19.032933    3504 oci.go:144] the created container "no-preload-104100" has a running status.
	I1205 07:47:19.032984    3504 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa...
	I1205 07:47:19.154579    3504 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:47:19.233905    3504 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:47:19.295033    3504 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:47:19.295033    3504 kic_runner.go:114] Args: [docker exec --privileged no-preload-104100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:47:19.433793    3504 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa...
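
The kic key step above generates an SSH keypair on the host and pushes the public half into the container's authorized_keys. A sketch of producing such an authorized_keys line in Go, using the golang.org/x/crypto/ssh module; the 2048-bit RSA size is an assumption (consistent with the 381-byte id_rsa.pub in the log), not something the log states:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the private key that would back id_rsa. (Key size assumed.)
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	// One "ssh-rsa AAAA..." line, suitable for /home/docker/.ssh/authorized_keys.
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}
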
	I1205 07:47:21.810142    3504 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:47:21.864698    3504 machine.go:94] provisionDockerMachine start ...
	I1205 07:47:21.869167    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:21.921175    3504 main.go:143] libmachine: Using SSH client type: native
	I1205 07:47:21.922176    3504 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60495 <nil> <nil>}
	I1205 07:47:21.922176    3504 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:47:22.107390    3504 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-104100
	
	I1205 07:47:22.107390    3504 ubuntu.go:182] provisioning hostname "no-preload-104100"
	I1205 07:47:22.111598    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:22.179564    3504 main.go:143] libmachine: Using SSH client type: native
	I1205 07:47:22.179564    3504 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60495 <nil> <nil>}
	I1205 07:47:22.179564    3504 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-104100 && echo "no-preload-104100" | sudo tee /etc/hostname
	I1205 07:47:22.387381    3504 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-104100
	
	I1205 07:47:22.392442    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:22.451662    3504 main.go:143] libmachine: Using SSH client type: native
	I1205 07:47:22.451662    3504 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60495 <nil> <nil>}
	I1205 07:47:22.451662    3504 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-104100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-104100/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-104100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:47:22.639946    3504 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:47:22.639946    3504 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:47:22.639946    3504 ubuntu.go:190] setting up certificates
	I1205 07:47:22.639946    3504 provision.go:84] configureAuth start
	I1205 07:47:22.644132    3504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-104100
	I1205 07:47:22.700987    3504 provision.go:143] copyHostCerts
	I1205 07:47:22.701567    3504 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:47:22.701612    3504 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:47:22.701940    3504 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:47:22.702966    3504 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:47:22.702966    3504 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:47:22.703299    3504 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:47:22.704215    3504 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:47:22.704215    3504 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:47:22.704468    3504 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:47:22.704656    3504 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-104100 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-104100]
	I1205 07:47:22.821927    3504 provision.go:177] copyRemoteCerts
	I1205 07:47:22.826188    3504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:47:22.829547    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:22.887515    3504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:47:23.012975    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:47:23.046224    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:47:23.077643    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:47:23.114217    3504 provision.go:87] duration metric: took 474.2145ms to configureAuth
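
configureAuth above generates a server certificate whose SANs cover every address the machine may be reached by (127.0.0.1, the container IP 192.168.103.2, localhost, minikube, and the profile name). A sketch with crypto/x509 showing how such SANs are attached; it self-signs for brevity, whereas the flow in the log signs with the minikube CA (ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-104100"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		// SAN entries, as listed in the provision.go:117 line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-104100"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template doubles as parent); the real flow uses the CA pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
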
	I1205 07:47:23.114257    3504 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:47:23.114770    3504 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:47:23.119022    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:23.180794    3504 main.go:143] libmachine: Using SSH client type: native
	I1205 07:47:23.180794    3504 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60495 <nil> <nil>}
	I1205 07:47:23.180794    3504 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 07:47:23.379393    3504 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 07:47:23.379393    3504 ubuntu.go:71] root file system type: overlay
	I1205 07:47:23.379393    3504 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 07:47:23.383264    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:23.442575    3504 main.go:143] libmachine: Using SSH client type: native
	I1205 07:47:23.443186    3504 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60495 <nil> <nil>}
	I1205 07:47:23.443186    3504 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 07:47:23.645577    3504 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 07:47:23.649486    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:23.711426    3504 main.go:143] libmachine: Using SSH client type: native
	I1205 07:47:23.712185    3504 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60495 <nil> <nil>}
	I1205 07:47:23.712231    3504 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 07:47:25.095104    3504 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 07:47:23.636932947 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
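The command the provisioner ran above encodes an idempotent update: diff exits non-zero when the installed unit and the freshly rendered one differ (or when one file is missing), and only then does the || { ... } branch install the new unit and restart Docker. The same logic restated as a standalone sketch, using the paths from the log:

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    # Files differ: install the new unit and bounce the daemon.
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	fi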
	I1205 07:47:25.095104    3504 machine.go:97] duration metric: took 3.2303543s to provisionDockerMachine
	I1205 07:47:25.095104    3504 client.go:176] duration metric: took 11.6854108s to LocalClient.Create
	I1205 07:47:25.095104    3504 start.go:167] duration metric: took 11.6854108s to libmachine.API.Create "no-preload-104100"
	I1205 07:47:25.095104    3504 start.go:293] postStartSetup for "no-preload-104100" (driver="docker")
	I1205 07:47:25.095104    3504 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:47:25.100250    3504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:47:25.102625    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:25.164238    3504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:47:25.303282    3504 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:47:25.311717    3504 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:47:25.311717    3504 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:47:25.311717    3504 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 07:47:25.311717    3504 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 07:47:25.312618    3504 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 07:47:25.317337    3504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:47:25.339724    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 07:47:25.371202    3504 start.go:296] duration metric: took 276.0942ms for postStartSetup
	I1205 07:47:25.377397    3504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-104100
	I1205 07:47:25.437395    3504 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\config.json ...
	I1205 07:47:25.443386    3504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:47:25.446388    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:25.501106    3504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:47:25.665833    3504 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:47:25.677278    3504 start.go:128] duration metric: took 12.2745859s to createHost
	I1205 07:47:25.677278    3504 start.go:83] releasing machines lock for "no-preload-104100", held for 12.2755744s
	I1205 07:47:25.681003    3504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-104100
	I1205 07:47:25.734270    3504 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 07:47:25.739355    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:25.739355    3504 ssh_runner.go:195] Run: cat /version.json
	I1205 07:47:25.742657    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:25.797134    3504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:47:25.802370    3504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	W1205 07:47:25.917388    3504 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
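The status-127 failure is the Windows binary name leaking into the Linux node: curl.exe does not exist inside the container, so the connectivity probe fails regardless of actual network state, and the registry warning below is driven by this probe. A manual recheck from the host, assuming the profile name from this run and that plain curl is present in the node image:

	minikube -p no-preload-104100 ssh -- curl -sS -m 2 https://registry.k8s.io/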
	I1205 07:47:25.922931    3504 ssh_runner.go:195] Run: systemctl --version
	I1205 07:47:25.942155    3504 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:47:25.953637    3504 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:47:25.956631    3504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:47:26.001426    3504 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:47:26.001523    3504 start.go:496] detecting cgroup driver to use...
	I1205 07:47:26.001523    3504 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:47:26.001623    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1205 07:47:26.017355    3504 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 07:47:26.017447    3504 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
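Per the linked proxy documentation, minikube honors the standard proxy environment variables at start time. A hedged example (the proxy address is a placeholder; the NO_PROXY entry mirrors the service CIDR used in this run, and on the Windows host itself these would be set as $env: variables in PowerShell):

	export HTTPS_PROXY=http://proxy.example.com:3128
	export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12
	minikube start -p no-preload-104100 --driver=docker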
	I1205 07:47:26.036561    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:47:26.056979    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:47:26.071476    3504 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:47:26.074462    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:47:26.094204    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:47:26.114424    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:47:26.137984    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:47:26.159885    3504 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:47:26.177882    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:47:26.195885    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:47:26.216706    3504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
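The sed series above rewrites /etc/containerd/config.toml in place: the pause image, the OOM-score restriction, the cgroupfs driver instead of SystemdCgroup, the runc v2 runtime handler, the CNI conf dir, and unprivileged ports. A quick spot check of the result (the grep patterns are illustrative):

	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml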
	I1205 07:47:26.237446    3504 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:47:26.261841    3504 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:47:26.279098    3504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:47:26.442535    3504 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:47:26.590393    3504 start.go:496] detecting cgroup driver to use...
	I1205 07:47:26.590924    3504 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:47:26.596092    3504 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 07:47:26.620896    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:47:26.647887    3504 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:47:26.723047    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:47:26.746878    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:47:26.765436    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:47:26.794458    3504 ssh_runner.go:195] Run: which cri-dockerd
	I1205 07:47:26.806005    3504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 07:47:26.822393    3504 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 07:47:26.850632    3504 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 07:47:26.975036    3504 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 07:47:27.136874    3504 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 07:47:27.137875    3504 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 07:47:27.169867    3504 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 07:47:27.200892    3504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:47:27.380330    3504 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 07:47:28.383683    3504 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0032932s)
	I1205 07:47:28.388495    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:47:28.420502    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 07:47:28.446771    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:47:28.470100    3504 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 07:47:28.611169    3504 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 07:47:28.765214    3504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:47:28.925579    3504 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 07:47:28.950924    3504 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 07:47:28.976031    3504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:47:29.113835    3504 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 07:47:29.219482    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:47:29.238623    3504 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 07:47:29.243706    3504 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 07:47:29.250159    3504 start.go:564] Will wait 60s for crictl version
	I1205 07:47:29.256260    3504 ssh_runner.go:195] Run: which crictl
	I1205 07:47:29.270511    3504 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:47:29.309361    3504 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 07:47:29.313075    3504 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:47:29.362947    3504 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:47:29.408666    3504 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 07:47:29.412710    3504 cli_runner.go:164] Run: docker exec -t no-preload-104100 dig +short host.docker.internal
	I1205 07:47:29.558753    3504 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 07:47:29.563033    3504 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 07:47:29.574127    3504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
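The /etc/hosts rewrite above is idempotent: it filters out any existing host.minikube.internal line before appending the fresh mapping, so repeated runs never duplicate the entry. Resolution can be confirmed inside the node (getent ships with the Debian bookworm base reported earlier):

	getent hosts host.minikube.internal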
	I1205 07:47:29.593550    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:47:29.648375    3504 kubeadm.go:884] updating cluster {Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:47:29.648375    3504 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:47:29.651367    3504 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 07:47:29.682455    3504 docker.go:691] Got preloaded images: 
	I1205 07:47:29.682455    3504 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1205 07:47:29.682455    3504 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:47:29.694868    3504 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:47:29.700649    3504 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:47:29.706088    3504 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:47:29.709561    3504 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:47:29.710885    3504 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:47:29.716742    3504 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:47:29.716742    3504 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:47:29.716742    3504 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:47:29.722929    3504 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:47:29.724432    3504 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:47:29.730711    3504 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:47:29.732586    3504 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:47:29.742169    3504 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:47:29.742169    3504 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:47:29.746232    3504 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:47:29.751305    3504 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1205 07:47:29.779759    3504 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:47:29.828808    3504 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:47:29.884618    3504 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:47:29.936487    3504 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:47:29.990986    3504 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:47:30.041189    3504 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:47:30.092434    3504 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:47:30.149143    3504 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1205 07:47:30.356084    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:47:30.356753    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 07:47:30.367021    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:47:30.367628    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:47:30.391854    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 07:47:30.399401    3504 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1205 07:47:30.399401    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:47:30.399401    3504 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:47:30.401163    3504 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1205 07:47:30.401163    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:47:30.401163    3504 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:47:30.405887    3504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:47:30.406473    3504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1205 07:47:30.408208    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:47:30.410837    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:47:30.416309    3504 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1205 07:47:30.416309    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:47:30.416309    3504 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1205 07:47:30.416309    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:47:30.416309    3504 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:47:30.416309    3504 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:47:30.422486    3504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:47:30.423671    3504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:47:30.436443    3504 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1205 07:47:30.436443    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:47:30.436443    3504 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:47:30.440415    3504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:47:30.521495    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:47:30.523409    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:47:30.523409    3504 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1205 07:47:30.523480    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:47:30.523536    3504 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:47:30.526517    3504 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1205 07:47:30.526517    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:47:30.526517    3504 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:47:30.529515    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:47:30.529515    3504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:47:30.530867    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:47:30.532177    3504 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:47:30.615347    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:47:30.619352    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:47:30.622420    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:47:30.622479    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:47:30.622654    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1205 07:47:30.623484    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:47:30.627295    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:47:30.629858    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:47:30.706446    3504 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:47:30.728950    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:47:30.728950    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:47:30.728950    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1205 07:47:30.733947    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:47:30.737950    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:47:30.737950    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:47:30.737950    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:47:30.737950    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1205 07:47:30.737950    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1205 07:47:30.741951    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:47:30.748945    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:47:30.748945    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1205 07:47:30.827964    3504 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 07:47:30.827964    3504 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:47:30.827964    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:47:30.827964    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:47:30.827964    3504 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:47:30.828616    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1205 07:47:30.828616    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1205 07:47:30.835359    3504 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:47:30.874820    3504 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:47:30.874875    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1205 07:47:31.027796    3504 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:47:31.034797    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:47:31.108794    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1205 07:47:31.233796    3504 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:47:31.233796    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1205 07:47:31.947659    3504 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:47:31.947659    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1205 07:47:37.118731    3504 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (5.1709894s)
	I1205 07:47:37.118731    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:47:37.118731    3504 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:47:37.118731    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1205 07:47:37.727697    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1205 07:47:37.727697    3504 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:47:37.727697    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1205 07:47:39.046848    3504 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (1.3191299s)
	I1205 07:47:39.046848    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:47:39.046848    3504 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:47:39.046848    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1205 07:47:42.107765    3504 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (3.0608687s)
	I1205 07:47:42.107765    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1205 07:47:42.107765    3504 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:47:42.107765    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1205 07:47:43.725598    3504 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.6178072s)
	I1205 07:47:43.725598    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:47:43.725598    3504 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:47:43.725598    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1205 07:47:46.143519    3504 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (2.4178834s)
	I1205 07:47:46.143519    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:47:46.143519    3504 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:47:46.143519    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1205 07:47:47.580982    3504 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.4374403s)
	I1205 07:47:47.580982    3504 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1205 07:47:47.580982    3504 cache_images.go:125] Successfully loaded all cached images
	I1205 07:47:47.580982    3504 cache_images.go:94] duration metric: took 17.8982444s to LoadCachedImages
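Every image above follows the same three-step pattern: stat the tarball on the node, scp it over from the host cache when the stat fails, then pipe it into docker load. The pause image's round trip, restated as standalone commands with the paths from the log:

	sudo stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1 || true   # absent on first run
	# (minikube then scps the tar from the host cache before loading it)
	/bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.10.1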
	I1205 07:47:47.580982    3504 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 07:47:47.580982    3504 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-104100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
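The unit snippet above clears and replaces kubelet's ExecStart the same way the docker.service override did; the rendered files are installed a few lines below via the scp of 10-kubeadm.conf and kubelet.service. Once in place, the effective unit can be inspected with systemd's own tooling:

	sudo systemctl cat kubelet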
	I1205 07:47:47.584596    3504 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 07:47:47.660972    3504 cni.go:84] Creating CNI manager for ""
	I1205 07:47:47.661040    3504 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 07:47:47.661040    3504 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:47:47.661040    3504 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-104100 NodeName:no-preload-104100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:47:47.661288    3504 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-104100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
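Recent kubeadm releases include a config validate subcommand that can sanity-check a rendered config of this shape before it is consumed. A sketch against the path where the log stages the file, assuming the subcommand is present in this beta build:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new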
	I1205 07:47:47.665743    3504 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:47:47.679795    3504 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:47:47.684305    3504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:47:47.698173    3504 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1205 07:47:47.698173    3504 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1205 07:47:47.698173    3504 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
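The checksum=file: suffix on each URL means the download is verified against the published .sha256 digest that sits next to the binary. The manual equivalent for kubelet, with the URLs exactly as logged (run wherever curl is available):

	curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check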
	I1205 07:47:47.703935    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:47:47.703935    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:47:47.703935    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:47:47.725316    3504 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:47:47.725316    3504 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:47:47.725316    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1205 07:47:47.725316    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1205 07:47:47.729308    3504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:47:47.745309    3504 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:47:47.745309    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1205 07:47:49.646820    3504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:47:49.659819    3504 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1205 07:47:49.680466    3504 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:47:49.704048    3504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1205 07:47:49.728606    3504 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:47:49.739265    3504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:47:49.760671    3504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:47:49.914693    3504 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:47:49.937876    3504 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100 for IP: 192.168.103.2
	I1205 07:47:49.937876    3504 certs.go:195] generating shared ca certs ...
	I1205 07:47:49.937876    3504 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:49.939286    3504 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 07:47:49.939456    3504 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 07:47:49.939456    3504 certs.go:257] generating profile certs ...
	I1205 07:47:49.940222    3504 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\client.key
	I1205 07:47:49.940423    3504 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\client.crt with IP's: []
	I1205 07:47:50.193675    3504 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\client.crt ...
	I1205 07:47:50.193675    3504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\client.crt: {Name:mk0d49cd493d4c49e8fa127a7797059dedcd421a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:50.194736    3504 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\client.key ...
	I1205 07:47:50.194736    3504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\client.key: {Name:mkd77b9f1372dcf0c04f725476d8bec1d9aa5b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:50.195719    3504 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key.f2627f70
	I1205 07:47:50.195719    3504 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.crt.f2627f70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1205 07:47:50.517779    3504 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.crt.f2627f70 ...
	I1205 07:47:50.517779    3504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.crt.f2627f70: {Name:mk52f278a06f3a17b8b004046b044d853d3fa8cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:50.518729    3504 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key.f2627f70 ...
	I1205 07:47:50.518729    3504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key.f2627f70: {Name:mkb63d0aa1a7b778afe1ef3b2bbb4521930dff0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:50.519670    3504 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.crt.f2627f70 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.crt
	I1205 07:47:50.532837    3504 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key.f2627f70 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key
	I1205 07:47:50.534530    3504 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.key
	I1205 07:47:50.534614    3504 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.crt with IP's: []
	I1205 07:47:50.603047    3504 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.crt ...
	I1205 07:47:50.603047    3504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.crt: {Name:mka599af866ed6e0ae25c57f1efd86fe304cb900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:50.604049    3504 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.key ...
	I1205 07:47:50.604049    3504 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.key: {Name:mk88f61313fd96b29d61b633387373d10c0c95f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:47:50.618526    3504 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 07:47:50.619074    3504 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 07:47:50.619074    3504 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 07:47:50.619223    3504 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 07:47:50.619223    3504 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 07:47:50.619223    3504 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 07:47:50.619844    3504 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 07:47:50.620218    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:47:50.652268    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:47:50.681558    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:47:50.712149    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:47:50.739752    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:47:50.770533    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:47:50.799558    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:47:50.831590    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:47:50.866400    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 07:47:50.896425    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:47:50.925171    3504 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 07:47:50.953181    3504 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:47:50.978152    3504 ssh_runner.go:195] Run: openssl version
	I1205 07:47:50.992944    3504 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:47:51.009361    3504 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:47:51.028773    3504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:47:51.038323    3504 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:47:51.041798    3504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:47:51.089989    3504 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:47:51.108176    3504 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:47:51.128607    3504 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 07:47:51.149062    3504 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 07:47:51.169701    3504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 07:47:51.178345    3504 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 07:47:51.182229    3504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 07:47:51.230030    3504 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:47:51.248034    3504 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8036.pem /etc/ssl/certs/51391683.0
	I1205 07:47:51.266300    3504 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 07:47:51.284977    3504 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 07:47:51.303512    3504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 07:47:51.312091    3504 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 07:47:51.316859    3504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 07:47:51.374351    3504 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:47:51.395028    3504 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/80362.pem /etc/ssl/certs/3ec20f2e.0
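The runs above show the runner installing each host CA using OpenSSL's hashed-symlink convention: the PEM is exposed under /etc/ssl/certs, its subject hash is computed, and a symlink named <hash>.0 is created so that OpenSSL's lookup-by-subject-hash can find it. A minimal shell sketch of that convention (illustrative only, not part of the captured log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem                   # expose the CA under /etc/ssl/certs
    HASH=$(openssl x509 -hash -noout -in "$CERT")                       # prints b5213941 for this CA, matching the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"  # OpenSSL resolves CAs by <subject-hash>.0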
	I1205 07:47:51.415116    3504 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:47:51.422110    3504 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:47:51.422110    3504 kubeadm.go:401] StartCluster: {Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:47:51.426947    3504 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 07:47:51.463902    3504 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:47:51.482191    3504 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:47:51.507524    3504 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:47:51.511774    3504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:47:51.525479    3504 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:47:51.525479    3504 kubeadm.go:158] found existing configuration files:
	
	I1205 07:47:51.529277    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:47:51.543996    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:47:51.548627    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:47:51.570058    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:47:51.586137    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:47:51.590737    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:47:51.613489    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:47:51.627391    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:47:51.632999    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:47:51.651503    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:47:51.665513    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:47:51.669502    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
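Before invoking kubeadm, the runner probes each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; here every grep exits with status 2 simply because the files do not exist yet, so each rm is a no-op. The check-and-remove sequence condenses to roughly this sketch (reconstructed from the log, not taken from minikube source):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"    # grep status 2 here just means the file is absent
      fi
    done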
	I1205 07:47:51.689506    3504 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:47:51.804971    3504 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 07:47:51.888875    3504 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:47:51.991241    3504 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:51:54.010627    3504 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:51:54.011616    3504 kubeadm.go:319] 
	I1205 07:51:54.011616    3504 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:51:54.013614    3504 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:51:54.013614    3504 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:51:54.014629    3504 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:51:54.014629    3504 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:51:54.014629    3504 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:51:54.014629    3504 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:51:54.014629    3504 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:51:54.014629    3504 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:51:54.014629    3504 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:51:54.014629    3504 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:51:54.014629    3504 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:51:54.015626    3504 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:51:54.016818    3504 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:51:54.017000    3504 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:51:54.017101    3504 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:51:54.017305    3504 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:51:54.017483    3504 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:51:54.017650    3504 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:51:54.017827    3504 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:51:54.018002    3504 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:51:54.018107    3504 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:51:54.018227    3504 kubeadm.go:319] OS: Linux
	I1205 07:51:54.018409    3504 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:51:54.018458    3504 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:51:54.018652    3504 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:51:54.018773    3504 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:51:54.018822    3504 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:51:54.018943    3504 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:51:54.019070    3504 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:51:54.019185    3504 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:51:54.019368    3504 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:51:54.019594    3504 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:51:54.019902    3504 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:51:54.020176    3504 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:51:54.020430    3504 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:51:54.023175    3504 out.go:252]   - Generating certificates and keys ...
	I1205 07:51:54.023175    3504 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:51:54.023175    3504 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:51:54.023175    3504 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:51:54.023715    3504 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:51:54.023790    3504 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:51:54.023790    3504 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:51:54.023790    3504 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:51:54.024317    3504 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-104100] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:51:54.024348    3504 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:51:54.024348    3504 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-104100] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1205 07:51:54.024894    3504 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:51:54.025078    3504 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:51:54.025078    3504 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:51:54.025078    3504 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:51:54.025078    3504 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:51:54.025078    3504 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:51:54.025078    3504 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:51:54.025078    3504 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:51:54.025078    3504 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:51:54.026039    3504 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:51:54.026039    3504 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:51:54.029085    3504 out.go:252]   - Booting up control plane ...
	I1205 07:51:54.029085    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:51:54.029085    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:51:54.029085    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:51:54.029085    3504 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:51:54.029085    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:51:54.030087    3504 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:51:54.030087    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:51:54.030087    3504 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:51:54.030087    3504 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:51:54.030087    3504 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:51:54.030087    3504 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000129446s
	I1205 07:51:54.030087    3504 kubeadm.go:319] 
	I1205 07:51:54.030087    3504 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:51:54.031171    3504 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:51:54.031171    3504 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:51:54.031171    3504 kubeadm.go:319] 
	I1205 07:51:54.031732    3504 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:51:54.031732    3504 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:51:54.031732    3504 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:51:54.031732    3504 kubeadm.go:319] 
	W1205 07:51:54.031732    3504 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-104100] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-104100] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
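This first init attempt times out in wait-control-plane: kubeadm polls the kubelet's local health endpoint for up to 4m0s and only ever sees connection refused, meaning the kubelet process never started listening. The probe and the triage kubeadm suggests can be reproduced by hand inside the node (commands as they appear in the log):

    curl -sSL http://127.0.0.1:10248/healthz    # connection refused => kubelet not listening at all
    systemctl status kubelet                    # is the unit even running?
    journalctl -xeu kubelet                     # if it started and died, the exit reason is here

minikube now resets the node and retries the same init.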
	I1205 07:51:54.035731    3504 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 07:51:54.509941    3504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:51:54.527941    3504 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:51:54.531939    3504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:51:54.547956    3504 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:51:54.547956    3504 kubeadm.go:158] found existing configuration files:
	
	I1205 07:51:54.552943    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:51:54.568940    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:51:54.572945    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:51:54.593957    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:51:54.611965    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:51:54.616962    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:51:54.633939    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:51:54.646964    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:51:54.650947    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:51:54.666942    3504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:51:54.678949    3504 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:51:54.682951    3504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:51:54.704974    3504 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:51:54.851892    3504 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 07:51:54.954709    3504 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:51:55.101021    3504 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:55:56.232863    3504 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:55:56.233024    3504 kubeadm.go:319] 
	I1205 07:55:56.233374    3504 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:55:56.238199    3504 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:55:56.238199    3504 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:55:56.239229    3504 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:55:56.239951    3504 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:55:56.240038    3504 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:55:56.240149    3504 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:55:56.240900    3504 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:55:56.240989    3504 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:55:56.241160    3504 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:55:56.241262    3504 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:55:56.241353    3504 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:55:56.241527    3504 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:55:56.241709    3504 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:55:56.241841    3504 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:55:56.241965    3504 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:55:56.242178    3504 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:55:56.242300    3504 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:55:56.242449    3504 kubeadm.go:319] OS: Linux
	I1205 07:55:56.242570    3504 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:55:56.242721    3504 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:55:56.243457    3504 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:55:56.243517    3504 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:55:56.243675    3504 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:55:56.243773    3504 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:55:56.592452    3504 out.go:252]   - Generating certificates and keys ...
	I1205 07:55:56.593639    3504 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:55:56.593845    3504 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:55:56.594114    3504 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:55:56.594161    3504 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:55:56.594421    3504 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:55:56.594527    3504 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:55:56.594848    3504 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:55:56.594994    3504 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:55:56.595183    3504 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:55:56.595515    3504 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:55:56.595613    3504 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:55:56.595780    3504 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:55:56.595940    3504 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:55:56.596106    3504 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:55:56.596218    3504 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:55:56.596381    3504 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:55:56.596498    3504 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:55:56.596674    3504 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:55:56.596833    3504 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:55:56.652657    3504 out.go:252]   - Booting up control plane ...
	I1205 07:55:56.653102    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:55:56.653292    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:55:56.653474    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:55:56.653708    3504 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:55:56.653923    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:55:56.654155    3504 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:55:56.654392    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:55:56.654499    3504 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:55:56.654779    3504 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:55:56.655037    3504 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:55:56.655160    3504 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001446272s
	I1205 07:55:56.655263    3504 kubeadm.go:319] 
	I1205 07:55:56.655375    3504 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:55:56.655475    3504 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:55:56.655710    3504 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:55:56.655741    3504 kubeadm.go:319] 
	I1205 07:55:56.655926    3504 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:55:56.656132    3504 kubeadm.go:319] 
	I1205 07:55:56.656232    3504 kubeadm.go:403] duration metric: took 8m5.2264324s to StartCluster
	I1205 07:55:56.656382    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:55:56.660935    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:55:56.720992    3504 cri.go:89] found id: ""
	I1205 07:55:56.720992    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.720992    3504 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:55:56.720992    3504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:55:56.726101    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:55:56.779606    3504 cri.go:89] found id: ""
	I1205 07:55:56.779629    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.779629    3504 logs.go:284] No container was found matching "etcd"
	I1205 07:55:56.779681    3504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:55:56.783808    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:55:56.856128    3504 cri.go:89] found id: ""
	I1205 07:55:56.856232    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.856232    3504 logs.go:284] No container was found matching "coredns"
	I1205 07:55:56.856262    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:55:56.860617    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:55:56.903334    3504 cri.go:89] found id: ""
	I1205 07:55:56.903419    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.903419    3504 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:55:56.903419    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:55:56.907807    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:55:56.970846    3504 cri.go:89] found id: ""
	I1205 07:55:56.970898    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.970898    3504 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:55:56.970898    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:55:56.975641    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:55:57.023174    3504 cri.go:89] found id: ""
	I1205 07:55:57.023174    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.023174    3504 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:55:57.023174    3504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:55:57.027175    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:55:57.077156    3504 cri.go:89] found id: ""
	I1205 07:55:57.077156    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.077156    3504 logs.go:284] No container was found matching "kindnet"
	I1205 07:55:57.077156    3504 logs.go:123] Gathering logs for dmesg ...
	I1205 07:55:57.077156    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:55:57.117328    3504 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:55:57.117328    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:55:57.220104    3504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:55:57.210538   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.211481   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.213010   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.214100   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.215335   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:55:57.210538   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.211481   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.213010   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.214100   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.215335   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:55:57.221075    3504 logs.go:123] Gathering logs for Docker ...
	I1205 07:55:57.221075    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:55:57.251103    3504 logs.go:123] Gathering logs for container status ...
	I1205 07:55:57.251103    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:55:57.303905    3504 logs.go:123] Gathering logs for kubelet ...
	I1205 07:55:57.303905    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
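With the second attempt also timing out, minikube switches to evidence gathering: it asks the CRI for each control-plane container (every query returns an empty list, confirming nothing was ever scheduled), tries kubectl describe nodes (refused, since no apiserver is up), then pulls the dmesg, docker/cri-docker and kubelet journals. The container sweep condenses to this sketch (commands verbatim from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"     # empty output for every component in this run
    done
    sudo journalctl -u kubelet -n 400           # the kubelet journal is where the real failure will show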
	W1205 07:55:57.367440    3504 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.367440    3504 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.369216    3504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:55:57.540920    3504 out.go:203] 
	W1205 07:55:57.554724    3504 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:55:57.554966    3504 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:55:57.554966    3504 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:55:57.597149    3504 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-104100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
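The root failure above is K8S_KUBELET_NOT_RUNNING: kubeadm waited the full 4m0s for the kubelet health endpoint at http://127.0.0.1:10248/healthz and never got an answer, while the SystemVerification warning flags the host's cgroups v1 setup under WSL2. A plausible manual retry, built only from the suggestion minikube itself prints (profile name and flags are copied from the failing invocation; the added --extra-config flag is the one the log recommends):

	out/minikube-windows-amd64.exe delete -p no-preload-104100
	out/minikube-windows-amd64.exe start -p no-preload-104100 --memory=3072 --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

Per the kubeadm warning, kubelet v1.35+ on a cgroups v1 host may instead need FailCgroupV1 set to false in the kubelet configuration; whether minikube exposes that field through --extra-config is not confirmed by this log.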
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-104100
helpers_test.go:243: (dbg) docker inspect no-preload-104100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043",
	        "Created": "2025-12-05T07:47:18.090294673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:47:18.384905784Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hosts",
	        "LogPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043-json.log",
	        "Name": "/no-preload-104100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-104100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-104100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-104100",
	                "Source": "/var/lib/docker/volumes/no-preload-104100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-104100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-104100",
	                "name.minikube.sigs.k8s.io": "no-preload-104100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9cf4340ae5aa61b1664fdb6401e79df00ee5d95456b58c783a5450634e707fb",
	            "SandboxKey": "/var/run/docker/netns/f9cf4340ae5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60497"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60499"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60500"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-104100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "707b5f83051fc4c181f3506b97f5ea358824531428895a55938badd3159b6c9f",
	                    "EndpointID": "17b4da3586c46e948162b9510e7b2371f3a3cf1ebbe0c711b2fa91578460e0c9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-104100",
	                        "5f2a793d7573"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
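For a quicker read of the same post-mortem, the few fields that matter can be pulled with a Go template instead of scanning the full JSON (docker inspect --format is stock Docker CLI; the container name matches the dump above):

	docker inspect --format "{{.State.Status}} pid={{.State.Pid}} restarts={{.RestartCount}}" no-preload-104100
	docker inspect --format "{{json .NetworkSettings.Ports}}" no-preload-104100

Here the container reports "running" with all five ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1, so the failure sits inside the guest (the kubelet), not at the Docker layer.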
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 6 (632.5341ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:55:58.792354    8568 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
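The exit status 6 is a secondary symptom: because the start failed, the profile's endpoint was never written to the kubeconfig, so kubectl still points at a stale context. Once a start succeeds, the warning's own remedy applies (update-context is a standard minikube subcommand):

	out/minikube-windows-amd64.exe update-context -p no-preload-104100

which should refresh the no-preload-104100 entry in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig.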
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25: (1.1761848s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-218000 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/kubernetes/kubelet.conf                                                           │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /var/lib/kubelet/config.yaml                                                           │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status docker --all --full --no-pager                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat docker --no-pager                                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/docker/daemon.json                                                                │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo docker system info                                                                         │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status cri-docker --all --full --no-pager                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat cri-docker --no-pager                                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cri-dockerd --version                                                                      │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status containerd --all --full --no-pager                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat containerd --no-pager                                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/containerd/config.toml                                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo containerd config dump                                                                     │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status crio --all --full --no-pager                                              │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	│ ssh     │ -p auto-218000 sudo systemctl cat crio --no-pager                                                              │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo crio config                                                                                │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ delete  │ -p auto-218000                                                                                                 │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ start   │ -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-863300                                                                                   │ kubernetes-upgrade-863300 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ start   │ -p calico-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker   │ calico-218000             │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:55:43
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:55:43.385785   11048 out.go:360] Setting OutFile to fd 1688 ...
	I1205 07:55:43.445538   11048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:43.445538   11048 out.go:374] Setting ErrFile to fd 840...
	I1205 07:55:43.445538   11048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:43.460218   11048 out.go:368] Setting JSON to false
	I1205 07:55:43.463643   11048 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12601,"bootTime":1764908742,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:55:43.463643   11048 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:55:43.467039   11048 out.go:179] * [calico-218000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:55:43.472324   11048 notify.go:221] Checking for updates...
	I1205 07:55:43.475120   11048 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:55:43.478124   11048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:55:43.480125   11048 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:55:43.483116   11048 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:55:43.485128   11048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:55:43.488117   11048 config.go:182] Loaded profile config "kindnet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:55:43.489119   11048 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:43.489119   11048 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:43.489119   11048 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:55:43.623399   11048 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:55:43.627393   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:43.878533   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:43.85365759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:43.883492   11048 out.go:179] * Using the docker driver based on user configuration
	I1205 07:55:41.623253    3768 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (14.452724s)
	I1205 07:55:41.623253    3768 kic.go:203] duration metric: took 14.4566514s to extract preloaded images to volume ...
	I1205 07:55:41.627859    3768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:41.863901    3768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:93 SystemTime:2025-12-05 07:55:41.838776023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:41.868259    3768 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:55:42.117545    3768 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-218000 --name kindnet-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-218000 --network kindnet-218000 --ip 192.168.94.2 --volume kindnet-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:55:43.388568    3768 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-218000 --name kindnet-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-218000 --network kindnet-218000 --ip 192.168.94.2 --volume kindnet-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b: (1.270844s)
	I1205 07:55:43.394720    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Running}}
	I1205 07:55:43.460218    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:43.516123    3768 cli_runner.go:164] Run: docker exec kindnet-218000 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:55:43.642405    3768 oci.go:144] the created container "kindnet-218000" has a running status.
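The docker run invocation above publishes the node's service ports (22 for SSH, 2376 for dockerd, 8443 for the Kubernetes apiserver, plus 5000 and 32443) to ephemeral ports on 127.0.0.1. The resulting mappings can be checked by hand with docker port; a minimal sketch, using the container name from this run:

    # Show which 127.0.0.1 host port each published container port landed on.
    docker port kindnet-218000 22      # SSH; this run resolved to 127.0.0.1:61226 (used by libmachine below)
    docker port kindnet-218000 8443    # Kubernetes apiserver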
	I1205 07:55:43.642405    3768 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa...
	I1205 07:55:43.880500    3768 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:55:43.953504    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:44.056802    3768 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:55:44.056802    3768 kic_runner.go:114] Args: [docker exec --privileged kindnet-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
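The key provisioning above is three ordinary steps: generate a key pair on the host, install the public half as the docker user's authorized_keys inside the container, and fix its ownership. A hand-run equivalent, with the host key path shortened for readability:

    # Generate a passphrase-less RSA key pair for the node.
    ssh-keygen -t rsa -N '' -f ./id_rsa
    # Install the public key as the docker user's authorized_keys.
    docker cp ./id_rsa.pub kindnet-218000:/home/docker/.ssh/authorized_keys
    # Match the chown performed by kic_runner above.
    docker exec --privileged kindnet-218000 chown docker:docker /home/docker/.ssh/authorized_keys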
	I1205 07:55:43.885483   11048 start.go:309] selected driver: docker
	I1205 07:55:43.885483   11048 start.go:927] validating driver "docker" against <nil>
	I1205 07:55:43.885483   11048 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:55:43.929498   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:44.213788   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:44.194232325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:44.213788   11048 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:55:44.214786   11048 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:55:44.217784   11048 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 07:55:44.219783   11048 cni.go:84] Creating CNI manager for "calico"
	I1205 07:55:44.219783   11048 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 07:55:44.219783   11048 start.go:353] cluster config:
	{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:55:44.221785   11048 out.go:179] * Starting "calico-218000" primary control-plane node in "calico-218000" cluster
	I1205 07:55:44.225783   11048 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:55:44.227787   11048 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:55:44.231783   11048 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:55:44.231783   11048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:44.231783   11048 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1205 07:55:44.232783   11048 cache.go:65] Caching tarball of preloaded images
	I1205 07:55:44.232783   11048 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1205 07:55:44.232783   11048 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1205 07:55:44.232783   11048 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-218000\config.json ...
	I1205 07:55:44.232783   11048 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-218000\config.json: {Name:mk91c6afceb766415a42b808b03437547163f98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:55:44.319041   11048 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:55:44.319041   11048 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:55:44.319041   11048 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:55:44.319041   11048 start.go:360] acquireMachinesLock for calico-218000: {Name:mkaef444365c0a217df0cccc3ef485884ea3ee5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:55:44.319041   11048 start.go:364] duration metric: took 0s to acquireMachinesLock for "calico-218000"
	I1205 07:55:44.319041   11048 start.go:93] Provisioning new machine with config: &{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:55:44.319041   11048 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:55:44.322048   11048 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:55:44.323048   11048 start.go:159] libmachine.API.Create for "calico-218000" (driver="docker")
	I1205 07:55:44.323048   11048 client.go:173] LocalClient.Create starting
	I1205 07:55:44.323048   11048 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:44.331040   11048 cli_runner.go:164] Run: docker network inspect calico-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:55:44.384044   11048 cli_runner.go:211] docker network inspect calico-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:55:44.387041   11048 network_create.go:284] running [docker network inspect calico-218000] to gather additional debugging logs...
	I1205 07:55:44.387041   11048 cli_runner.go:164] Run: docker network inspect calico-218000
	W1205 07:55:44.437033   11048 cli_runner.go:211] docker network inspect calico-218000 returned with exit code 1
	I1205 07:55:44.437033   11048 network_create.go:287] error running [docker network inspect calico-218000]: docker network inspect calico-218000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-218000 not found
	I1205 07:55:44.437033   11048 network_create.go:289] output of [docker network inspect calico-218000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-218000 not found
	
	** /stderr **
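The exit status 1 with "network calico-218000 not found" is the expected result on a fresh profile: the inspect is a probe, and its failure routes minikube into network creation. The same probe-then-create pattern, reduced to plain shell:

    # Create the profile network only when it does not already exist.
    docker network inspect calico-218000 >/dev/null 2>&1 \
      || docker network create calico-218000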
	I1205 07:55:44.440033   11048 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:55:44.518567   11048 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.549565   11048 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.581054   11048 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.612112   11048 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.644105   11048 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.676183   11048 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.707994   11048 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.726521   11048 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001705a40}
	I1205 07:55:44.726521   11048 network_create.go:124] attempt to create docker network calico-218000 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1205 07:55:44.730522   11048 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-218000 calico-218000
	I1205 07:55:44.872256   11048 network_create.go:108] docker network calico-218000 192.168.112.0/24 created
	I1205 07:55:44.872296   11048 kic.go:121] calculated static IP "192.168.112.2" for the "calico-218000" container
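In this run the subnet scan walks the private ranges 192.168.49.0/24, 192.168.58.0/24, and so on in steps of 9 until it finds one not reserved by an existing Docker network, then creates the bridge with a pinned subnet, gateway, and MTU. A simplified sketch of the final create, using the values chosen above (the ip-masq/icc bridge options from the full command are left out):

    # Create the bridge network on the first free /24 found by the scan.
    docker network create --driver=bridge \
      --subnet=192.168.112.0/24 --gateway=192.168.112.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      calico-218000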
	I1205 07:55:44.887833   11048 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:55:44.947118   11048 cli_runner.go:164] Run: docker volume create calico-218000 --label name.minikube.sigs.k8s.io=calico-218000 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:55:45.020325   11048 oci.go:103] Successfully created a docker volume calico-218000
	I1205 07:55:45.024743   11048 cli_runner.go:164] Run: docker run --rm --name calico-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --entrypoint /usr/bin/test -v calico-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:55:46.189558   11048 cli_runner.go:217] Completed: docker run --rm --name calico-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --entrypoint /usr/bin/test -v calico-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.164797s)
	I1205 07:55:46.189558   11048 oci.go:107] Successfully prepared a docker volume calico-218000
	I1205 07:55:46.189558   11048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:46.189558   11048 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:55:46.195557   11048 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
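The preload step mounts the cached lz4 tarball read-only into a throwaway container and untars it into the profile's named volume, which later backs /var in the node container. Run by hand it looks like the sketch below; $PRELOAD_TARBALL and $KICBASE_IMAGE are placeholders for the long host path and digest-pinned image shown in the log line above:

    # Extract the preloaded images into the calico-218000 volume.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v calico-218000:/extractDir \
      "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir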
	I1205 07:55:44.191788    3768 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa...
	I1205 07:55:46.472372    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:46.525606    3768 machine.go:94] provisionDockerMachine start ...
	I1205 07:55:46.531024    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:46.591825    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:46.606706    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:46.606706    3768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:55:46.882633    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-218000
	
	I1205 07:55:46.882633    3768 ubuntu.go:182] provisioning hostname "kindnet-218000"
	I1205 07:55:46.886539    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:46.942319    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:46.943089    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:46.943089    3768 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-218000 && echo "kindnet-218000" | sudo tee /etc/hostname
	I1205 07:55:47.144763    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-218000
	
	I1205 07:55:47.148216    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.200257    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:47.200548    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:47.200548    3768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:55:47.383155    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:55:47.383235    3768 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:55:47.383267    3768 ubuntu.go:190] setting up certificates
	I1205 07:55:47.383348    3768 provision.go:84] configureAuth start
	I1205 07:55:47.386186    3768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-218000
	I1205 07:55:47.434188    3768 provision.go:143] copyHostCerts
	I1205 07:55:47.434188    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:55:47.434188    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:55:47.434188    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:55:47.435186    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:55:47.435186    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:55:47.435186    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:55:47.436186    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:55:47.436186    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:55:47.436186    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:55:47.437185    3768 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-218000 san=[127.0.0.1 192.168.94.2 kindnet-218000 localhost minikube]
	I1205 07:55:47.506006    3768 provision.go:177] copyRemoteCerts
	I1205 07:55:47.510770    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:55:47.513952    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.565725    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:47.689901    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:55:47.721502    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:55:47.749769    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1205 07:55:47.778148    3768 provision.go:87] duration metric: took 394.7705ms to configureAuth
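configureAuth generated a server certificate whose SANs cover 127.0.0.1, the container IP (192.168.94.2), and the hostname, then copied ca.pem, server.pem, and server-key.pem to /etc/docker so dockerd can run with --tlsverify. If the handshake ever needs debugging, the chain can be checked from the host; a sketch, assuming the cert files from the .minikube paths above are in the current directory:

    # Confirm the server cert chains to the minikube CA and inspect its SANs.
    openssl verify -CAfile ca.pem server.pem
    openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'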
	I1205 07:55:47.778148    3768 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:55:47.778148    3768 config.go:182] Loaded profile config "kindnet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:55:47.781148    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.831153    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:47.832148    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:47.832148    3768 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 07:55:48.034092    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 07:55:48.034092    3768 ubuntu.go:71] root file system type: overlay
	I1205 07:55:48.034092    3768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 07:55:48.038282    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:48.099329    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:48.099941    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:48.100168    3768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 07:55:48.308272    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 07:55:48.311928    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:48.367848    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:48.367927    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:48.367927    3768 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
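That last SSH command is what makes unit provisioning idempotent: the rendered file is written to docker.service.new, diffed against the live unit, and only swapped in (followed by daemon-reload, enable, and restart) when the two differ. Unrolled for readability:

    # Replace docker.service only when the rendered unit actually changed.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    fi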
	I1205 07:55:56.232863    3504 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:55:56.233024    3504 kubeadm.go:319] 
	I1205 07:55:56.233374    3504 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:55:56.238199    3504 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:55:56.238199    3504 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:55:56.239229    3504 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:55:56.239951    3504 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:55:56.240038    3504 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:55:56.240149    3504 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:55:56.240900    3504 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:55:56.240989    3504 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:55:56.241160    3504 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:55:56.241262    3504 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:55:56.241353    3504 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:55:56.241527    3504 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:55:56.241709    3504 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:55:56.241841    3504 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:55:56.241965    3504 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:55:56.242178    3504 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:55:56.242300    3504 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:55:56.242449    3504 kubeadm.go:319] OS: Linux
	I1205 07:55:56.242570    3504 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:55:56.242721    3504 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:55:56.243457    3504 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:55:56.243517    3504 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:55:56.243675    3504 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:55:56.243773    3504 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:55:56.592452    3504 out.go:252]   - Generating certificates and keys ...
	I1205 07:55:56.593639    3504 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:55:56.593845    3504 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:55:56.594114    3504 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:55:56.594161    3504 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:55:56.594421    3504 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:55:56.594527    3504 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:55:56.594848    3504 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:55:56.594994    3504 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:55:56.595183    3504 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:55:56.595515    3504 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:55:56.595613    3504 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:55:56.595780    3504 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:55:56.595940    3504 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:55:56.596106    3504 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:55:56.596218    3504 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:55:56.596381    3504 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:55:56.596498    3504 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:55:56.596674    3504 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:55:56.596833    3504 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:55:56.652657    3504 out.go:252]   - Booting up control plane ...
	I1205 07:55:56.653102    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:55:56.653292    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:55:56.653474    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:55:56.653708    3504 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:55:56.653923    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:55:56.654155    3504 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:55:56.654392    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:55:56.654499    3504 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:55:56.654779    3504 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:55:56.655037    3504 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:55:56.655160    3504 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001446272s
	I1205 07:55:56.655263    3504 kubeadm.go:319] 
	I1205 07:55:56.655375    3504 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:55:56.655475    3504 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:55:56.655710    3504 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:55:56.655741    3504 kubeadm.go:319] 
	I1205 07:55:56.655926    3504 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:55:56.656132    3504 kubeadm.go:319] 
	I1205 07:55:56.656232    3504 kubeadm.go:403] duration metric: took 8m5.2264324s to StartCluster
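The failure above is kubeadm's wait-control-plane phase timing out on the kubelet health endpoint (127.0.0.1:10248/healthz). The diagnostics kubeadm itself suggests are the right first moves, run from a shell inside the affected node (for example via minikube ssh -p <profile>):

    # Check whether the kubelet is running and why it may have exited.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet -n 200 --no-pager
    # Probe the endpoint kubeadm was polling.
    curl -sS http://127.0.0.1:10248/healthz; echo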
	I1205 07:55:56.656382    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:55:56.660935    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:55:56.720992    3504 cri.go:89] found id: ""
	I1205 07:55:56.720992    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.720992    3504 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:55:56.720992    3504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:55:56.726101    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:55:56.779606    3504 cri.go:89] found id: ""
	I1205 07:55:56.779629    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.779629    3504 logs.go:284] No container was found matching "etcd"
	I1205 07:55:56.779681    3504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:55:56.783808    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:55:56.856128    3504 cri.go:89] found id: ""
	I1205 07:55:56.856232    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.856232    3504 logs.go:284] No container was found matching "coredns"
	I1205 07:55:56.856262    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:55:56.860617    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:55:56.903334    3504 cri.go:89] found id: ""
	I1205 07:55:56.903419    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.903419    3504 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:55:56.903419    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:55:56.907807    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:55:56.970846    3504 cri.go:89] found id: ""
	I1205 07:55:56.970898    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.970898    3504 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:55:56.970898    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:55:56.975641    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:55:57.023174    3504 cri.go:89] found id: ""
	I1205 07:55:57.023174    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.023174    3504 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:55:57.023174    3504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:55:57.027175    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:55:57.077156    3504 cri.go:89] found id: ""
	I1205 07:55:57.077156    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.077156    3504 logs.go:284] No container was found matching "kindnet"
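After the kubeadm failure, minikube sweeps the CRI for every expected control-plane container and finds none, which confirms the kubelet never launched the static pods. The sweep condenses to a loop like this, run inside the node:

    # Check whether any control-plane container was ever created.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done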
	I1205 07:55:57.077156    3504 logs.go:123] Gathering logs for dmesg ...
	I1205 07:55:57.077156    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:55:57.117328    3504 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:55:57.117328    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:55:57.220104    3504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:55:57.210538   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.211481   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.213010   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.214100   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.215335   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:55:57.221075    3504 logs.go:123] Gathering logs for Docker ...
	I1205 07:55:57.221075    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:55:57.251103    3504 logs.go:123] Gathering logs for container status ...
	I1205 07:55:57.251103    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:55:57.303905    3504 logs.go:123] Gathering logs for kubelet ...
	I1205 07:55:57.303905    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:55:57.367440    3504 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.367440    3504 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.369216    3504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:55:57.540920    3504 out.go:203] 
	W1205 07:55:57.554724    3504 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1205 07:55:57.554966    3504 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:55:57.554966    3504 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:55:57.597149    3504 out.go:203] 
	I1205 07:55:57.892052   11048 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (11.6963092s)
	I1205 07:55:57.892052   11048 kic.go:203] duration metric: took 11.7023081s to extract preloaded images to volume ...
	I1205 07:55:57.897048   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:58.164942   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:58.141964925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:58.167943   11048 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:55:58.420951   11048 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-218000 --name calico-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-218000 --network calico-218000 --ip 192.168.112.2 --volume calico-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	
	
	==> Docker <==
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204268162Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204356772Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204649702Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204658903Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204665404Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204692206Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204726910Z" level=info msg="Initializing buildkit"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.370721193Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379527304Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379697822Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379729725Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379786131Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:47:28 no-preload-104100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:55:59.825537   10981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:59.826543   10981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:59.828653   10981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:59.829887   10981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:59.830726   10981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +9.275386] tmpfs: Unknown parameter 'noswap'
	[Dec 5 07:54] tmpfs: Unknown parameter 'noswap'
	[Dec 5 07:55] CPU: 7 PID: 400141 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f946ada5b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f946ada5af6.
	[  +0.000001] RSP: 002b:00007fffd68862b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000004] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[ +15.228693] CPU: 5 PID: 402145 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8357565b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f8357565af6.
	[  +0.000001] RSP: 002b:00007ffe78e9c470 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:55:59 up  3:29,  0 user,  load average: 2.86, 3.84, 3.66
	Linux no-preload-104100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:55:56 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:57 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 07:55:57 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:57 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:57 no-preload-104100 kubelet[10780]: E1205 07:55:57.135934   10780 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:57 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:57 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:57 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 07:55:57 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:57 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:57 no-preload-104100 kubelet[10835]: E1205 07:55:57.891324   10835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:57 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:57 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:58 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 07:55:58 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:58 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:58 no-preload-104100 kubelet[10846]: E1205 07:55:58.624501   10846 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:58 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:58 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:55:59 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 05 07:55:59 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:59 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:55:59 no-preload-104100 kubelet[10875]: E1205 07:55:59.373906   10875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:55:59 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:55:59 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
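The kubelet journal above shows the same validation failure on every scheduled restart (the counter is past 320): kubelet v1.35.0-beta.0 refuses to run on a host that only exposes cgroup v1 unless the FailCgroupV1 configuration option is set to false. As a quick cross-check, two standard commands (generic diagnostics, not taken from this run; run inside the minikube container or the WSL2 VM) report which cgroup hierarchy the host exposes:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup/
	# the Docker daemon's own view of the same thing; prints 1 or 2
	docker info --format '{{.CgroupVersion}}'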
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 6 (662.6701ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:56:00.811583    7844 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (528.97s)
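As a sketch, the retry proposed by minikube's Suggestion line above would look like the following (the --extra-config value is quoted from that line; since the kubelet journal blames the cgroup v1 FailCgroupV1 validation rather than the cgroup driver, it is not certain this alone would clear the failure):

	out/minikube-windows-amd64.exe start -p no-preload-104100 --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd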

TestStartStop/group/newest-cni/serial/FirstStart (537.34s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-042100 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-042100 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m53.2987432s)

-- stdout --
	* [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1205 07:52:52.679865    1056 out.go:360] Setting OutFile to fd 428 ...
	I1205 07:52:52.722866    1056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:52:52.722866    1056 out.go:374] Setting ErrFile to fd 1892...
	I1205 07:52:52.722866    1056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:52:52.737708    1056 out.go:368] Setting JSON to false
	I1205 07:52:52.741065    1056 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12430,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:52:52.741065    1056 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:52:52.750470    1056 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:52:52.753014    1056 notify.go:221] Checking for updates...
	I1205 07:52:52.755238    1056 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:52:52.757208    1056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:52:52.759871    1056 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:52:52.761996    1056 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:52:52.764524    1056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:52:52.767462    1056 config.go:182] Loaded profile config "default-k8s-diff-port-944500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:52:52.767462    1056 config.go:182] Loaded profile config "kubernetes-upgrade-863300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:52:52.768012    1056 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:52:52.768012    1056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:52:52.902312    1056 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:52:52.905313    1056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:52:53.152599    1056 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:52:53.130684134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:52:53.155548    1056 out.go:179] * Using the docker driver based on user configuration
	I1205 07:52:53.158552    1056 start.go:309] selected driver: docker
	I1205 07:52:53.158552    1056 start.go:927] validating driver "docker" against <nil>
	I1205 07:52:53.158552    1056 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:52:53.210442    1056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:52:53.464956    1056 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:52:53.449166273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:52:53.464956    1056 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1205 07:52:53.464956    1056 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
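The warning above names minikube's --cni flag as the user-friendly alternative; a minimal sketch of that form, assuming one of minikube's built-in CNI values (bridge is chosen here purely for illustration):

	out/minikube-windows-amd64.exe start -p newest-cni-042100 --memory=3072 --driver=docker --kubernetes-version=v1.35.0-beta.0 --cni=bridge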
	I1205 07:52:53.465957    1056 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:52:53.469958    1056 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 07:52:53.471956    1056 cni.go:84] Creating CNI manager for ""
	I1205 07:52:53.471956    1056 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 07:52:53.471956    1056 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 07:52:53.471956    1056 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:52:53.474956    1056 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 07:52:53.478956    1056 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:52:53.481956    1056 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:52:53.486532    1056 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:52:53.486532    1056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 07:52:53.535113    1056 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 07:52:53.563352    1056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:52:53.563394    1056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:52:53.831327    1056 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 07:52:53.831327    1056 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:52:53.831958    1056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json: {Name:mkc5ab0eaf1a0604f7912fece70ae6e57a928eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:52:53.831327    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:52:53.832832    1056 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:52:53.832832    1056 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:53.833435    1056 start.go:364] duration metric: took 603µs to acquireMachinesLock for "newest-cni-042100"
	I1205 07:52:53.833590    1056 start.go:93] Provisioning new machine with config: &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:52:53.833739    1056 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:52:53.838354    1056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:52:53.838449    1056 start.go:159] libmachine.API.Create for "newest-cni-042100" (driver="docker")
	I1205 07:52:53.838449    1056 client.go:173] LocalClient.Create starting
	I1205 07:52:53.839153    1056 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 07:52:53.839153    1056 main.go:143] libmachine: Decoding PEM data...
	I1205 07:52:53.839153    1056 main.go:143] libmachine: Parsing certificate...
	I1205 07:52:53.839153    1056 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 07:52:53.839855    1056 main.go:143] libmachine: Decoding PEM data...
	I1205 07:52:53.839855    1056 main.go:143] libmachine: Parsing certificate...
	I1205 07:52:53.847225    1056 cli_runner.go:164] Run: docker network inspect newest-cni-042100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:52:53.927888    1056 cli_runner.go:211] docker network inspect newest-cni-042100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:52:53.935948    1056 network_create.go:284] running [docker network inspect newest-cni-042100] to gather additional debugging logs...
	I1205 07:52:53.935948    1056 cli_runner.go:164] Run: docker network inspect newest-cni-042100
	W1205 07:52:54.157465    1056 cli_runner.go:211] docker network inspect newest-cni-042100 returned with exit code 1
	I1205 07:52:54.157465    1056 network_create.go:287] error running [docker network inspect newest-cni-042100]: docker network inspect newest-cni-042100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-042100 not found
	I1205 07:52:54.157465    1056 network_create.go:289] output of [docker network inspect newest-cni-042100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-042100 not found
	
	** /stderr **
	I1205 07:52:54.165465    1056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:52:54.269504    1056 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:52:54.301732    1056 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:52:54.330565    1056 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c01200}
	I1205 07:52:54.330565    1056 network_create.go:124] attempt to create docker network newest-cni-042100 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1205 07:52:54.336153    1056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-042100 newest-cni-042100
	W1205 07:52:55.093788    1056 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-042100 newest-cni-042100 returned with exit code 1
	W1205 07:52:55.093981    1056 network_create.go:149] failed to create docker network newest-cni-042100 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-042100 newest-cni-042100: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1205 07:52:55.093981    1056 network_create.go:116] failed to create docker network newest-cni-042100 192.168.67.0/24, will retry: subnet is taken
	I1205 07:52:55.241308    1056 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:52:55.289055    1056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c594a0}
	I1205 07:52:55.289055    1056 network_create.go:124] attempt to create docker network newest-cni-042100 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 07:52:55.296533    1056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-042100 newest-cni-042100
	I1205 07:52:55.795551    1056 network_create.go:108] docker network newest-cni-042100 192.168.76.0/24 created
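Note: the sequence above shows minikube's subnet hunt: 192.168.49.0/24 and 192.168.58.0/24 are already reserved, 192.168.67.0/24 is chosen but lost to a concurrent network ("Pool overlaps with other one on this address space"), and 192.168.76.0/24 finally succeeds. A rough sketch of that retry loop, assuming illustrative names and the same 9-wide stepping visible in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createNetwork tries successive private /24 subnets until
    // `docker network create` stops failing with "Pool overlaps".
    func createNetwork(name string) (string, error) {
        for third := 49; third <= 255; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if !strings.Contains(string(out), "Pool overlaps") {
                return "", fmt.Errorf("create %s: %v: %s", subnet, err, out)
            }
            // subnet taken by another network; advance and retry
        }
        return "", fmt.Errorf("no free /24 found")
    }

    func main() {
        subnet, err := createNetwork("newest-cni-042100")
        fmt.Println(subnet, err)
    }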
	I1205 07:52:55.795804    1056 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-042100" container
	I1205 07:52:55.815457    1056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:52:55.893458    1056 cli_runner.go:164] Run: docker volume create newest-cni-042100 --label name.minikube.sigs.k8s.io=newest-cni-042100 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:52:55.972055    1056 oci.go:103] Successfully created a docker volume newest-cni-042100
	I1205 07:52:55.978058    1056 cli_runner.go:164] Run: docker run --rm --name newest-cni-042100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-042100 --entrypoint /usr/bin/test -v newest-cni-042100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:52:56.781590    1056 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.781659    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 07:52:56.781659    1056 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.9495403s
	I1205 07:52:56.781659    1056 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 07:52:56.783607    1056 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.784140    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 07:52:56.784221    1056 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9522162s
	I1205 07:52:56.784221    1056 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 07:52:56.811604    1056 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.811687    1056 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.811687    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 07:52:56.811687    1056 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9797297s
	I1205 07:52:56.811687    1056 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 07:52:56.811687    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 07:52:56.811687    1056 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.980313s
	I1205 07:52:56.812228    1056 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:52:56.819294    1056 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.819902    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:52:56.820077    1056 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 2.988703s
	I1205 07:52:56.820130    1056 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:52:56.825011    1056 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.825011    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 07:52:56.825011    1056 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.9936361s
	I1205 07:52:56.825011    1056 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 07:52:56.858218    1056 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.858218    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:52:56.858218    1056 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.026168s
	I1205 07:52:56.858218    1056 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:52:56.983385    1056 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:52:56.983762    1056 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:52:56.983934    1056 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.152521s
	I1205 07:52:56.983934    1056 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:52:56.983934    1056 cache.go:87] Successfully saved all images to host disk.
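Note: each cache line above pairs an acquired per-image lock with an existence check on the sanitized tarball path; since every image is already cached, nothing is re-downloaded. A small sketch of the path mapping and check, with illustrative names (the ':'-to-'_' swap matches the "windows sanitize" lines earlier, since ':' is not legal in Windows file names):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachePath maps an image reference to its on-disk cache file,
    // replacing the tag separator ':' with '_' for Windows.
    func cachePath(cacheDir, image string) string {
        sanitized := strings.ReplaceAll(image, ":", "_")
        return filepath.Join(cacheDir, filepath.FromSlash(sanitized))
    }

    func main() {
        cacheDir := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64`
        for _, img := range []string{
            "registry.k8s.io/pause:3.10.1",
            "registry.k8s.io/etcd:3.6.5-0",
        } {
            p := cachePath(cacheDir, img)
            if _, err := os.Stat(p); err == nil {
                fmt.Println("exists, skipping download:", p)
            } else {
                fmt.Println("missing, would download:", p)
            }
        }
    }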
	I1205 07:52:57.507149    1056 cli_runner.go:217] Completed: docker run --rm --name newest-cni-042100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-042100 --entrypoint /usr/bin/test -v newest-cni-042100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.5290668s)
	I1205 07:52:57.507149    1056 oci.go:107] Successfully prepared a docker volume newest-cni-042100
	I1205 07:52:57.507149    1056 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:52:57.511553    1056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:52:57.773850    1056 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:52:57.749609848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:52:57.778638    1056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:52:58.033430    1056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-042100 --name newest-cni-042100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-042100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-042100 --network newest-cni-042100 --ip 192.168.76.2 --volume newest-cni-042100:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:52:58.814204    1056 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Running}}
	I1205 07:52:58.878755    1056 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 07:52:58.943768    1056 cli_runner.go:164] Run: docker exec newest-cni-042100 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:52:59.070457    1056 oci.go:144] the created container "newest-cni-042100" has a running status.
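Note: the `docker run` above publishes each guest port (22, 2376, 8443, ...) as `--publish=127.0.0.1::<port>`, letting Docker pick a random loopback-only host port; the repeated `docker container inspect -f` calls below recover the chosen port. A minimal sketch of that lookup, under illustrative names:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort recovers the host-side port Docker assigned to a
    // --publish=127.0.0.1::<containerPort> mapping.
    func hostPort(container, containerPort string) (string, error) {
        format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("newest-cni-042100", "22")
        fmt.Println(p, err) // the log below resolves this to 60996
    }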
	I1205 07:52:59.070457    1056 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa...
	I1205 07:52:59.218458    1056 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:52:59.293434    1056 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 07:52:59.359288    1056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:52:59.359288    1056 kic_runner.go:114] Args: [docker exec --privileged newest-cni-042100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:52:59.528173    1056 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa...
	I1205 07:53:01.887732    1056 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 07:53:01.951718    1056 machine.go:94] provisionDockerMachine start ...
	I1205 07:53:01.957768    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:02.027773    1056 main.go:143] libmachine: Using SSH client type: native
	I1205 07:53:02.043663    1056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60996 <nil> <nil>}
	I1205 07:53:02.043663    1056 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:53:02.237955    1056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 07:53:02.237955    1056 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 07:53:02.243980    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:02.307640    1056 main.go:143] libmachine: Using SSH client type: native
	I1205 07:53:02.308661    1056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60996 <nil> <nil>}
	I1205 07:53:02.308661    1056 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 07:53:02.520093    1056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 07:53:02.524373    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:02.579365    1056 main.go:143] libmachine: Using SSH client type: native
	I1205 07:53:02.579956    1056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60996 <nil> <nil>}
	I1205 07:53:02.580012    1056 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:53:02.796240    1056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:53:02.796240    1056 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:53:02.796240    1056 ubuntu.go:190] setting up certificates
	I1205 07:53:02.796240    1056 provision.go:84] configureAuth start
	I1205 07:53:02.799972    1056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 07:53:02.859550    1056 provision.go:143] copyHostCerts
	I1205 07:53:02.860534    1056 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:53:02.860534    1056 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:53:02.860534    1056 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:53:02.861535    1056 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:53:02.861535    1056 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:53:02.861535    1056 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:53:02.862541    1056 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:53:02.862541    1056 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:53:02.862541    1056 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:53:02.862541    1056 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
	I1205 07:53:03.086940    1056 provision.go:177] copyRemoteCerts
	I1205 07:53:03.091833    1056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:53:03.094796    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:03.150284    1056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60996 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 07:53:03.285937    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:53:03.318359    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:53:03.352939    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:53:03.381534    1056 provision.go:87] duration metric: took 585.2847ms to configureAuth
	I1205 07:53:03.381534    1056 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:53:03.382516    1056 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:53:03.390408    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:03.443988    1056 main.go:143] libmachine: Using SSH client type: native
	I1205 07:53:03.443988    1056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60996 <nil> <nil>}
	I1205 07:53:03.444993    1056 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 07:53:03.635821    1056 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 07:53:03.635821    1056 ubuntu.go:71] root file system type: overlay
	I1205 07:53:03.635821    1056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 07:53:03.638823    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:03.697830    1056 main.go:143] libmachine: Using SSH client type: native
	I1205 07:53:03.698818    1056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60996 <nil> <nil>}
	I1205 07:53:03.698818    1056 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 07:53:03.894891    1056 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 07:53:03.900247    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:03.970877    1056 main.go:143] libmachine: Using SSH client type: native
	I1205 07:53:03.971632    1056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 60996 <nil> <nil>}
	I1205 07:53:03.971687    1056 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 07:53:05.339597    1056 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 07:53:03.886023900 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1205 07:53:05.339701    1056 machine.go:97] duration metric: took 3.387929s to provisionDockerMachine
	I1205 07:53:05.339746    1056 client.go:176] duration metric: took 11.5011141s to LocalClient.Create
	I1205 07:53:05.339746    1056 start.go:167] duration metric: took 11.5011141s to libmachine.API.Create "newest-cni-042100"
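Note: the docker.service rewrite just above is idempotent: the rendered unit is written to docker.service.new, `diff -u` decides whether anything actually changed, and only when the files differ is the new unit moved into place and the daemon reloaded and restarted (which is why the log shows the full diff). A rough local sketch of the same pattern, with illustrative names and assuming root privileges; this is not minikube's provisioner code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit installs newContent at unitPath only if it differs from
    // what is already there, restarting docker on a change.
    func updateUnit(unitPath, newContent string) error {
        if err := os.WriteFile(unitPath+".new", []byte(newContent), 0o644); err != nil {
            return err
        }
        // `diff -u` exits 0 when the files match, so `||` skips the
        // move/reload/restart branch entirely on a no-op update.
        script := fmt.Sprintf(
            "diff -u %[1]s %[1]s.new || { mv %[1]s.new %[1]s && systemctl daemon-reload && systemctl restart docker; }",
            unitPath)
        out, err := exec.Command("sh", "-c", script).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        _ = updateUnit // sketch only; rendering the full unit text is omitted
    }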
	I1205 07:53:05.339796    1056 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 07:53:05.339796    1056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:53:05.346461    1056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:53:05.350480    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:05.409475    1056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60996 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 07:53:05.554093    1056 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:53:05.562557    1056 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:53:05.562557    1056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:53:05.562557    1056 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 07:53:05.563540    1056 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 07:53:05.563540    1056 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 07:53:05.570721    1056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:53:05.586390    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 07:53:05.619138    1056 start.go:296] duration metric: took 279.2172ms for postStartSetup
	I1205 07:53:05.625402    1056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 07:53:05.692991    1056 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 07:53:05.700984    1056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:53:05.703984    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:05.756012    1056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60996 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 07:53:05.894155    1056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:53:05.903029    1056 start.go:128] duration metric: took 12.0690411s to createHost
	I1205 07:53:05.903029    1056 start.go:83] releasing machines lock for "newest-cni-042100", held for 12.0693452s
	I1205 07:53:05.906477    1056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 07:53:05.966742    1056 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 07:53:05.971470    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:05.971470    1056 ssh_runner.go:195] Run: cat /version.json
	I1205 07:53:05.973805    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:06.021466    1056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60996 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 07:53:06.022471    1056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60996 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 07:53:06.152436    1056 ssh_runner.go:195] Run: systemctl --version
	W1205 07:53:06.155403    1056 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 07:53:06.169674    1056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:53:06.180005    1056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:53:06.184246    1056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1205 07:53:06.250648    1056 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 07:53:06.250717    1056 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 07:53:06.565879    1056 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:53:06.565949    1056 start.go:496] detecting cgroup driver to use...
	I1205 07:53:06.565991    1056 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:53:06.566174    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:53:06.638370    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:53:06.657818    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:53:06.674601    1056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:53:06.679555    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:53:06.699861    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:53:06.720027    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:53:06.741595    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:53:06.761897    1056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:53:06.780633    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:53:06.806193    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:53:06.832084    1056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
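Note: the run of `sed -i -r` commands above amounts to a set of regex rewrites on /etc/containerd/config.toml, forcing the "cgroupfs" driver detected on the host. A minimal in-memory illustration of the SystemdCgroup edit in Go's regexp syntax (the string literal is a made-up fragment, not the container's real config):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, `${1}SystemdCgroup = false`))
    }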
	I1205 07:53:06.858507    1056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:53:06.875499    1056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:53:06.892491    1056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:53:07.040582    1056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:53:07.448866    1056 start.go:496] detecting cgroup driver to use...
	I1205 07:53:07.448866    1056 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:53:07.453376    1056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 07:53:07.485456    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:53:07.511806    1056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:53:07.579351    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:53:07.601353    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:53:07.624355    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:53:07.912958    1056 ssh_runner.go:195] Run: which cri-dockerd
	I1205 07:53:07.926387    1056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 07:53:07.940696    1056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 07:53:07.966278    1056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 07:53:08.136888    1056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 07:53:08.278599    1056 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 07:53:08.278599    1056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 07:53:08.308641    1056 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 07:53:08.333712    1056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:53:08.497899    1056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 07:53:09.930537    1056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4326153s)
	I1205 07:53:09.935531    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:53:09.957535    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 07:53:09.980536    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:53:10.002531    1056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 07:53:10.163489    1056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 07:53:10.324266    1056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:53:10.480196    1056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 07:53:10.511258    1056 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 07:53:10.533384    1056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:53:10.682055    1056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 07:53:10.789234    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:53:10.809024    1056 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 07:53:10.817265    1056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 07:53:10.825846    1056 start.go:564] Will wait 60s for crictl version
	I1205 07:53:10.832670    1056 ssh_runner.go:195] Run: which crictl
	I1205 07:53:10.843971    1056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:53:10.894627    1056 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
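Note: both "Will wait 60s" lines above are bounded polls: minikube stats the cri-dockerd socket, then probes `crictl version`, giving the freshly restarted services up to a minute to come up. A simple sketch of such a socket wait, with illustrative names:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }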
	I1205 07:53:10.899392    1056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:53:10.951358    1056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:53:10.997105    1056 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 07:53:11.000751    1056 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 07:53:11.146956    1056 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 07:53:11.152450    1056 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 07:53:11.160517    1056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
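The bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file back in a single sudo step. Unpacked for readability (IP and hostname taken from the log):

    # rewrite /etc/hosts without the old entry, then append the new one
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.65.254\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts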
	I1205 07:53:11.251328    1056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 07:53:11.306454    1056 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:53:11.309344    1056 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:53:11.309344    1056 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:53:11.313300    1056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 07:53:11.348274    1056 docker.go:691] Got preloaded images: 
	I1205 07:53:11.348274    1056 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1205 07:53:11.348274    1056 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:53:11.360543    1056 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:53:11.367934    1056 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:53:11.371777    1056 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:53:11.372914    1056 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:53:11.376127    1056 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:53:11.377681    1056 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:53:11.381409    1056 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:53:11.381409    1056 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:53:11.388545    1056 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:53:11.389296    1056 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:53:11.395096    1056 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:53:11.398711    1056 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:53:11.400691    1056 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:53:11.401690    1056 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:53:11.404721    1056 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:53:11.410691    1056 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1205 07:53:11.445776    1056 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:53:11.494287    1056 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:53:11.546115    1056 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:53:11.600928    1056 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:53:11.652808    1056 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:53:11.706483    1056 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:53:11.758211    1056 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 07:53:11.812851    1056 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
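The eight W... lines above share one cause: the Windows credential helper behind the image lookups fails with "A specified logon session does not exist", so each lookup retries anonymously. That is harmless for these images, since registry.k8s.io and gcr.io serve them without credentials, e.g.:

    # anonymous pulls of the affected images work without a credential helper
    docker pull registry.k8s.io/pause:3.10.1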
	I1205 07:53:11.914262    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:53:11.920445    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:53:11.930736    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:53:11.936156    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:53:11.959724    1056 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1205 07:53:11.959724    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:53:11.959724    1056 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:53:11.962573    1056 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1205 07:53:11.962636    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:53:11.962678    1056 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:53:11.965877    1056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:53:11.967219    1056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:53:11.970684    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 07:53:11.983101    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 07:53:11.983175    1056 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1205 07:53:11.983175    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:53:11.983175    1056 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:53:11.988478    1056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:53:11.998349    1056 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1205 07:53:11.998349    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:53:11.998349    1056 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:53:12.003062    1056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:53:12.028067    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:53:12.093339    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:53:12.093339    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:53:12.093493    1056 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1205 07:53:12.093493    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:53:12.093493    1056 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:53:12.100530    1056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:53:12.101341    1056 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1205 07:53:12.101472    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:53:12.101387    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:53:12.101573    1056 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:53:12.103837    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:53:12.103837    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:53:12.108000    1056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1205 07:53:12.109287    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:53:12.180433    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:53:12.187794    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:53:12.188809    1056 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1205 07:53:12.188809    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:53:12.188809    1056 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:53:12.193794    1056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:53:12.212635    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:53:12.212635    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:53:12.212635    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:53:12.212635    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1205 07:53:12.212635    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1205 07:53:12.215632    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:53:12.215684    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:53:12.215784    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:53:12.215863    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1205 07:53:12.215961    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1205 07:53:12.221107    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:53:12.225061    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:53:12.300831    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:53:12.306821    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:53:12.311831    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:53:12.311831    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1205 07:53:12.372822    1056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:53:12.411837    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:53:12.412830    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1205 07:53:12.441830    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:53:12.442833    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1205 07:53:12.509367    1056 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 07:53:12.509400    1056 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:53:12.509400    1056 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:53:12.515680    1056 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:53:12.646041    1056 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:53:12.646041    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1205 07:53:12.725038    1056 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:53:12.731043    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:53:12.928043    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1205 07:53:12.929052    1056 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:53:12.929052    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
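Every cached image above follows the same check-then-copy pattern: stat the tarball under /var/lib/minikube/images on the node, and scp it from the host-side cache only when the stat exits non-zero. A condensed sketch of that pattern (LOCAL_CACHE and the node alias are placeholders, not values from the log):

    # copy a cached image tarball only if the node does not already have it
    IMG=/var/lib/minikube/images/pause_3.10.1
    if ! stat -c "%s %y" "$IMG" >/dev/null 2>&1; then
        scp "$LOCAL_CACHE/pause_3.10.1" "node:$IMG"
    fi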
	I1205 07:53:13.265827    1056 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:53:13.265827    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1205 07:53:16.085292    1056 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (2.8194199s)
	I1205 07:53:16.085292    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:53:16.085292    1056 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:53:16.085292    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1205 07:53:17.408110    1056 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (1.3227963s)
	I1205 07:53:17.408110    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:53:17.408110    1056 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:53:17.408110    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1205 07:53:20.064634    1056 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (2.6564315s)
	I1205 07:53:20.064712    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:53:20.064765    1056 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:53:20.064765    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1205 07:53:30.490384    1056 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (10.4254534s)
	I1205 07:53:30.490384    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1205 07:53:30.490384    1056 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:53:30.490384    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1205 07:53:32.134367    1056 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.6439569s)
	I1205 07:53:32.134367    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:53:32.134367    1056 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:53:32.134367    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1205 07:53:35.539929    1056 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (3.4055074s)
	I1205 07:53:35.539992    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1205 07:53:35.539992    1056 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:53:35.539992    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1205 07:53:37.087580    1056 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.5475633s)
	I1205 07:53:37.087580    1056 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1205 07:53:37.087580    1056 cache_images.go:125] Successfully loaded all cached images
	I1205 07:53:37.087580    1056 cache_images.go:94] duration metric: took 25.7388967s to LoadCachedImages
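Loading is uniform across the transferred tarballs: each file is streamed into the runtime with sudo cat piped into docker load, exactly as the Run: lines show. For one image:

    # load a transferred tarball into the Docker image store
    sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load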
	I1205 07:53:37.087580    1056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 07:53:37.087580    1056 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:53:37.091583    1056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 07:53:37.185012    1056 cni.go:84] Creating CNI manager for ""
	I1205 07:53:37.185065    1056 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 07:53:37.185065    1056 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:53:37.185065    1056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:53:37.185065    1056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
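That completes the generated kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); further down the log it is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init. When a start like this one fails, the same file can be exercised without side effects, assuming kubeadm is already installed on the node:

    # validate the generated config without touching cluster state
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run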
	
	I1205 07:53:37.192200    1056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:53:37.205654    1056 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:53:37.211229    1056 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:53:37.224306    1056 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1205 07:53:37.224306    1056 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1205 07:53:37.224306    1056 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
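The three binary.go lines mean kubectl, kubeadm and kubelet come straight from dl.k8s.io, with the published .sha256 file used as the checksum source. The equivalent manual fetch-and-verify for one binary:

    # download a kubelet binary and verify it against its published sha256
    VER=v1.35.0-beta.0
    curl -LO "https://dl.k8s.io/release/$VER/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/$VER/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check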
	I1205 07:53:37.230919    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:53:37.230967    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:53:37.231558    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:53:37.239001    1056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:53:37.239001    1056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:53:37.239001    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1205 07:53:37.239001    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1205 07:53:37.262728    1056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:53:37.330618    1056 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:53:37.330618    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1205 07:53:39.117437    1056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:53:39.131434    1056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 07:53:39.151071    1056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:53:39.174135    1056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1205 07:53:39.200810    1056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:53:39.208038    1056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:53:39.228186    1056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:53:39.371938    1056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:53:39.396440    1056 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 07:53:39.396440    1056 certs.go:195] generating shared ca certs ...
	I1205 07:53:39.396440    1056 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:53:39.397789    1056 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 07:53:39.397789    1056 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 07:53:39.397789    1056 certs.go:257] generating profile certs ...
	I1205 07:53:39.398454    1056 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 07:53:39.398454    1056 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.crt with IP's: []
	I1205 07:53:39.535023    1056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.crt ...
	I1205 07:53:39.535023    1056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.crt: {Name:mk315701082a0248e9ac9c4ee62fe83e1c265ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:53:39.535989    1056 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key ...
	I1205 07:53:39.535989    1056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key: {Name:mk4d178388fadce0cf54ae72dc5029cd877940d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:53:39.536899    1056 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 07:53:39.536899    1056 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt.d01368e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:53:39.742574    1056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt.d01368e3 ...
	I1205 07:53:39.742574    1056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt.d01368e3: {Name:mk5ace6e252769ff68893aadc12ff098aeb21cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:53:39.743592    1056 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3 ...
	I1205 07:53:39.743592    1056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3: {Name:mk20607a95b86dafb4703976748e29e3667fa28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:53:39.744621    1056 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt.d01368e3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt
	I1205 07:53:39.757443    1056 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key
	I1205 07:53:39.758421    1056 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 07:53:39.758421    1056 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt with IP's: []
	I1205 07:53:39.827655    1056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt ...
	I1205 07:53:39.827655    1056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt: {Name:mk124447a38aaed41ac2913a27a5ff6367d060ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:53:39.828693    1056 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key ...
	I1205 07:53:39.828693    1056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key: {Name:mka02aec504d409d5990041131c4de1b1cabbbb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:53:39.842677    1056 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 07:53:39.842677    1056 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 07:53:39.842677    1056 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 07:53:39.843210    1056 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 07:53:39.843386    1056 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 07:53:39.843519    1056 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 07:53:39.843519    1056 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 07:53:39.845051    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:53:39.877495    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:53:39.906471    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:53:39.939448    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:53:39.969411    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:53:40.000125    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 07:53:40.031284    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:53:40.063551    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:53:40.096387    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 07:53:40.129173    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 07:53:40.160338    1056 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:53:40.193162    1056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:53:40.219215    1056 ssh_runner.go:195] Run: openssl version
	I1205 07:53:40.234017    1056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 07:53:40.253415    1056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 07:53:40.276033    1056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 07:53:40.285664    1056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 07:53:40.289066    1056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 07:53:40.338842    1056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:53:40.358866    1056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/80362.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:53:40.381284    1056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:53:40.401905    1056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:53:40.420967    1056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:53:40.428496    1056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:53:40.432695    1056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:53:40.481450    1056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:53:40.500509    1056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:53:40.522109    1056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 07:53:40.539613    1056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 07:53:40.558357    1056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 07:53:40.569609    1056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 07:53:40.572600    1056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 07:53:40.623046    1056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:53:40.640861    1056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8036.pem /etc/ssl/certs/51391683.0
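The test/ln/openssl sequence above builds OpenSSL-style hashed symlinks under /etc/ssl/certs so TLS clients locate each CA by subject hash; for minikubeCA.pem the computed hash is b5213941, matching the b5213941.0 link created above. The pattern for a single certificate:

    # link a CA into the OpenSSL cert directory under its subject hash
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$HASH.0"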
	I1205 07:53:40.659356    1056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:53:40.668315    1056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:53:40.668458    1056 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:53:40.672291    1056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 07:53:40.705271    1056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:53:40.722106    1056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:53:40.736187    1056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:53:40.740161    1056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:53:40.754435    1056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:53:40.754435    1056 kubeadm.go:158] found existing configuration files:
	
	I1205 07:53:40.758958    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:53:40.775342    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:53:40.779844    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:53:40.797195    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:53:40.811976    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:53:40.816864    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:53:40.836798    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:53:40.850903    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:53:40.854906    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:53:40.873985    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:53:40.889183    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:53:40.893354    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:53:40.909349    1056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:53:41.029033    1056 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 07:53:41.118618    1056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:53:41.244576    1056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:57:43.366096    1056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:57:43.366628    1056 kubeadm.go:319] 
	I1205 07:57:43.366787    1056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
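This is the real failure of the run: kubeadm init waited from 07:53:41 to 07:57:43 for the kubelet's health endpoint on 127.0.0.1:10248 and never got an answer, so the wait-control-plane phase aborted. (The [init] lines that follow appear to be the buffered stdout of the same kubeadm run, flushed after the error.) The standard follow-up on the node, per kubeadm's own guidance, is to inspect the kubelet unit:

    # why did the kubelet never come up?
    systemctl status kubelet
    journalctl -xeu kubelet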
	I1205 07:57:43.379391    1056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:57:43.379473    1056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:57:43.379781    1056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:57:43.379781    1056 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:57:43.379781    1056 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:57:43.379781    1056 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:57:43.380311    1056 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:57:43.380645    1056 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:57:43.380857    1056 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:57:43.380991    1056 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:57:43.381189    1056 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:57:43.381362    1056 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:57:43.381560    1056 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:57:43.381707    1056 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:57:43.381837    1056 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:57:43.382028    1056 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:57:43.382245    1056 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:57:43.382347    1056 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:57:43.382618    1056 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:57:43.382835    1056 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:57:43.383055    1056 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:57:43.383180    1056 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:57:43.383339    1056 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:57:43.383489    1056 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:57:43.383707    1056 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:57:43.383876    1056 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:57:43.384025    1056 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:57:43.384206    1056 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:57:43.384329    1056 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:57:43.384466    1056 kubeadm.go:319] OS: Linux
	I1205 07:57:43.384623    1056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:57:43.384846    1056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:57:43.385211    1056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:57:43.385336    1056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:57:43.385482    1056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:57:43.385599    1056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:57:43.385724    1056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:57:43.385789    1056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:57:43.385871    1056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:57:43.386072    1056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:57:43.386146    1056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:57:43.386146    1056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:57:43.386146    1056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:57:43.391373    1056 out.go:252]   - Generating certificates and keys ...
	I1205 07:57:43.391581    1056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:57:43.391662    1056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:57:43.391662    1056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:57:43.391662    1056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:57:43.391662    1056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:57:43.391662    1056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:57:43.392277    1056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:57:43.392534    1056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-042100] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:57:43.392684    1056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:57:43.392956    1056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-042100] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:57:43.392956    1056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:57:43.392956    1056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:57:43.392956    1056 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:57:43.393519    1056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:57:43.393622    1056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:57:43.393823    1056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:57:43.393951    1056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:57:43.394125    1056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:57:43.394167    1056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:57:43.394167    1056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:57:43.394167    1056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:57:43.397353    1056 out.go:252]   - Booting up control plane ...
	I1205 07:57:43.397411    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:57:43.397411    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:57:43.397411    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:57:43.397970    1056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:57:43.397970    1056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:57:43.397970    1056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:57:43.398535    1056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:57:43.398535    1056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:57:43.398535    1056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:57:43.398535    1056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:57:43.399121    1056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187836s
	I1205 07:57:43.399121    1056 kubeadm.go:319] 
	I1205 07:57:43.399121    1056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:57:43.399121    1056 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:57:43.399121    1056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:57:43.399121    1056 kubeadm.go:319] 
	I1205 07:57:43.399690    1056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:57:43.399690    1056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:57:43.399690    1056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:57:43.399690    1056 kubeadm.go:319] 
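	The wait that times out above is kubeadm's kubelet health probe against http://127.0.0.1:10248/healthz. It can be replayed by hand; a minimal sketch, assuming the docker driver names the node container after the profile (newest-cni-042100, per the certificate SANs above):
	
		# the same probe kubeadm polls for up to 4m0s
		docker exec newest-cni-042100 curl -sSL http://127.0.0.1:10248/healthz
		# if the connection is refused, inspect the kubelet unit as the error text suggests
		docker exec newest-cni-042100 sudo systemctl status kubelet
		docker exec newest-cni-042100 sudo journalctl -xeu kubelet --no-pager
	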
	W1205 07:57:43.399690    1056 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-042100] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-042100] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187836s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
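	Both the Swap and SystemVerification warnings above point at the cgroup v1/v2 split on this WSL2 host. Which hierarchy the node actually mounts can be checked with stat; a hedged one-liner (cgroup2fs means the v2 unified hierarchy, tmpfs means legacy v1):
	
		docker exec newest-cni-042100 stat -fc %T /sys/fs/cgroup
	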
	
	I1205 07:57:43.404364    1056 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 07:57:43.883697    1056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:57:43.903197    1056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:57:43.907757    1056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:57:43.921888    1056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:57:43.921888    1056 kubeadm.go:158] found existing configuration files:
	
	I1205 07:57:43.927375    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:57:43.943723    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:57:43.948580    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:57:43.970956    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:57:43.988275    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:57:43.993534    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:57:44.018215    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:57:44.031455    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:57:44.036283    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:57:44.059165    1056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:57:44.074321    1056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:57:44.078332    1056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:57:44.095322    1056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:57:44.218330    1056 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 07:57:44.310106    1056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:57:44.422477    1056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 08:01:45.188319    1056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 08:01:45.188319    1056 kubeadm.go:319] 
	I1205 08:01:45.188319    1056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 08:01:45.191322    1056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 08:01:45.192319    1056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 08:01:45.192319    1056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 08:01:45.192319    1056 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_INET: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] OS: Linux
	I1205 08:01:45.195329    1056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 08:01:45.197312    1056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 08:01:45.197312    1056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 08:01:45.197312    1056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 08:01:45.200324    1056 out.go:252]   - Generating certificates and keys ...
	I1205 08:01:45.200324    1056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 08:01:45.200324    1056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 08:01:45.202312    1056 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 08:01:45.202312    1056 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 08:01:45.202312    1056 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 08:01:45.203321    1056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 08:01:45.203321    1056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 08:01:45.203321    1056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 08:01:45.203321    1056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 08:01:45.207317    1056 out.go:252]   - Booting up control plane ...
	I1205 08:01:45.207317    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 08:01:45.207317    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 08:01:45.209322    1056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000992736s
	I1205 08:01:45.209322    1056 kubeadm.go:319] 
	I1205 08:01:45.209322    1056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- The kubelet is not running
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 08:01:45.210317    1056 kubeadm.go:319] 
	I1205 08:01:45.210317    1056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 08:01:45.210317    1056 kubeadm.go:319] 
	I1205 08:01:45.210317    1056 kubeadm.go:403] duration metric: took 8m4.5341682s to StartCluster
	I1205 08:01:45.210317    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 08:01:45.214317    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 08:01:45.280016    1056 cri.go:89] found id: ""
	I1205 08:01:45.280016    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.280016    1056 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:01:45.280016    1056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 08:01:45.284017    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 08:01:45.326531    1056 cri.go:89] found id: ""
	I1205 08:01:45.326531    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.326531    1056 logs.go:284] No container was found matching "etcd"
	I1205 08:01:45.326531    1056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 08:01:45.332138    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 08:01:45.377345    1056 cri.go:89] found id: ""
	I1205 08:01:45.377438    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.377438    1056 logs.go:284] No container was found matching "coredns"
	I1205 08:01:45.377562    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 08:01:45.382104    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 08:01:45.425147    1056 cri.go:89] found id: ""
	I1205 08:01:45.425147    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.425147    1056 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:01:45.425147    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 08:01:45.429455    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 08:01:45.478730    1056 cri.go:89] found id: ""
	I1205 08:01:45.478730    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.478730    1056 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:01:45.478730    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 08:01:45.482728    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 08:01:45.533489    1056 cri.go:89] found id: ""
	I1205 08:01:45.533489    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.533489    1056 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:01:45.533489    1056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 08:01:45.538462    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 08:01:45.588632    1056 cri.go:89] found id: ""
	I1205 08:01:45.588632    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.588632    1056 logs.go:284] No container was found matching "kindnet"
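	Every crictl query above returns an empty id list, i.e. no control-plane container was ever created. The same check, run by hand with the tool the log already uses:
	
		# empty output (not an error) means the static pods never started
		docker exec newest-cni-042100 sudo crictl ps -a
	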
	I1205 08:01:45.588632    1056 logs.go:123] Gathering logs for kubelet ...
	I1205 08:01:45.588632    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:01:45.650205    1056 logs.go:123] Gathering logs for dmesg ...
	I1205 08:01:45.650205    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:01:45.690570    1056 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:01:45.690570    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:01:45.774146    1056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:01:45.763926   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.764746   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767033   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767902   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.770073   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:01:45.763926   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.764746   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767033   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767902   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.770073   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
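	The connection-refused errors above mean kubectl never reached an API server on localhost:8443, consistent with the empty crictl listings. Probing the port directly (a sketch, not part of the original run) separates "nothing listening" from a TLS or auth failure:
	
		# connection refused here confirms kube-apiserver never bound the port
		docker exec newest-cni-042100 curl -k https://localhost:8443/healthz
	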
	I1205 08:01:45.774146    1056 logs.go:123] Gathering logs for Docker ...
	I1205 08:01:45.774146    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:01:45.807714    1056 logs.go:123] Gathering logs for container status ...
	I1205 08:01:45.807714    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 08:01:45.862103    1056 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000992736s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 08:01:45.862191    1056 out.go:285] * 
	W1205 08:01:45.862283    1056 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000992736s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 08:01:45.862283    1056 out.go:285] * 
	W1205 08:01:45.864148    1056 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:01:45.868140    1056 out.go:203] 
	W1205 08:01:45.871129    1056 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000992736s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 08:01:45.872135    1056 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 08:01:45.872135    1056 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 08:01:45.875127    1056 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-042100 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
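Given the K8S_KUBELET_NOT_RUNNING exit above, a minimal manual follow-up might look like the sketch below. The binary path and the newest-cni-042100 profile are the ones used in this run, and the flags are the ones the error text itself suggests; treat it as a starting point, not a confirmed fix:

	# Inspect kubelet state inside the node container
	out/minikube-windows-amd64.exe ssh -p newest-cni-042100 -- sudo systemctl status kubelet --no-pager
	out/minikube-windows-amd64.exe ssh -p newest-cni-042100 -- sudo journalctl -xeu kubelet --no-pager
	# Retry with the suggested cgroup driver, then capture full logs for an issue report
	out/minikube-windows-amd64.exe start -p newest-cni-042100 --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-windows-amd64.exe -p newest-cni-042100 logs --file=logs.txt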
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042100
helpers_test.go:243: (dbg) docker inspect newest-cni-042100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619",
	        "Created": "2025-12-05T07:52:58.091352749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:52:58.409795785Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hosts",
	        "LogPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619-json.log",
	        "Name": "/newest-cni-042100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-042100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042100",
	                "Source": "/var/lib/docker/volumes/newest-cni-042100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042100",
	                "name.minikube.sigs.k8s.io": "newest-cni-042100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d47f853c8e70c6c1bc253dda5cf25981c875d7148f5ef4b552fe47fc0978269",
	            "SandboxKey": "/var/run/docker/netns/5d47f853c8e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60997"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60999"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61000"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "174359b7b50b3bec7b4847d3ab43850e80d128f01a95736675cb3ceba87aab04",
	                    "EndpointID": "bfc06a82bdc1be8e4c759d8c79c5b8e1403b9190ee5a6b321c993ee5e273b5dc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042100",
	                        "ee0c9d80d83a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
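The inspect dump above is exhaustive; when only the health-relevant fields matter, docker's standard Go-template output narrows it down. A minimal sketch, reusing the container name from this run:

	# Print just the container state and the published host ports of the node container
	docker inspect -f "{{.State.Status}}" newest-cni-042100
	docker inspect -f "{{json .NetworkSettings.Ports}}" newest-cni-042100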
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100: exit status 6 (622.1993ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 08:01:47.026129    7652 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
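The stale-context warning above has a documented remedy: minikube's update-context subcommand rewrites the kubeconfig entry to point at the container's current endpoint. A minimal sketch for this profile (it clears the kubeconfig mismatch but cannot revive a control plane that never became healthy):

	# Repoint the kubeconfig entry, then verify the context list
	out/minikube-windows-amd64.exe update-context -p newest-cni-042100
	kubectl config get-contexts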
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25: (1.8711345s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p false-218000 sudo journalctl -xeu kubelet --all --full --no-pager                                           │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cat /etc/kubernetes/kubelet.conf                                                          │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cat /var/lib/kubelet/config.yaml                                                          │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo systemctl status docker --all --full --no-pager                                           │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo systemctl cat docker --no-pager                                                           │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cat /etc/docker/daemon.json                                                               │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo docker system info                                                                        │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo systemctl status cri-docker --all --full --no-pager                                       │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo systemctl cat cri-docker --no-pager                                                       │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                  │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                            │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cri-dockerd --version                                                                     │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo systemctl status containerd --all --full --no-pager                                       │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo systemctl cat containerd --no-pager                                                       │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cat /lib/systemd/system/containerd.service                                                │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo cat /etc/containerd/config.toml                                                           │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo containerd config dump                                                                    │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo systemctl status crio --all --full --no-pager                                             │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │                     │
	│ ssh     │ -p false-218000 sudo systemctl cat crio --no-pager                                                             │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                   │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ ssh     │ -p false-218000 sudo crio config                                                                               │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ delete  │ -p false-218000                                                                                                │ false-218000              │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:00 UTC │
	│ start   │ -p flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:00 UTC │ 05 Dec 25 08:01 UTC │
	│ ssh     │ -p enable-default-cni-218000 pgrep -a kubelet                                                                  │ enable-default-cni-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:01 UTC │ 05 Dec 25 08:01 UTC │
	│ ssh     │ -p flannel-218000 pgrep -a kubelet                                                                             │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:01 UTC │ 05 Dec 25 08:01 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 08:00:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:00:27.362214   10268 out.go:360] Setting OutFile to fd 1660 ...
	I1205 08:00:27.406219   10268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:00:27.406219   10268 out.go:374] Setting ErrFile to fd 1500...
	I1205 08:00:27.406219   10268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:00:27.422235   10268 out.go:368] Setting JSON to false
	I1205 08:00:27.425397   10268 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12885,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:00:27.425397   10268 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:00:27.429491   10268 out.go:179] * [flannel-218000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:00:27.432795   10268 notify.go:221] Checking for updates...
	I1205 08:00:27.435195   10268 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:00:27.437587   10268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:00:27.440481   10268 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:00:27.442801   10268 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:00:27.444812   10268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:00:27.446541   10268 config.go:182] Loaded profile config "enable-default-cni-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:00:27.447933   10268 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:00:27.447933   10268 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:00:27.447933   10268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:00:27.577614   10268 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:00:27.582222   10268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:00:27.820575   10268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:00:27.798636911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:00:27.827957   10268 out.go:179] * Using the docker driver based on user configuration
	I1205 08:00:27.830957   10268 start.go:309] selected driver: docker
	I1205 08:00:27.830957   10268 start.go:927] validating driver "docker" against <nil>
	I1205 08:00:27.830957   10268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:00:27.871509   10268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:00:28.105692   10268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:00:28.084823544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:00:28.105692   10268 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 08:00:28.106688   10268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:00:28.109689   10268 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 08:00:28.111687   10268 cni.go:84] Creating CNI manager for "flannel"
	I1205 08:00:28.111687   10268 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1205 08:00:28.111687   10268 start.go:353] cluster config:
	{Name:flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:00:28.114687   10268 out.go:179] * Starting "flannel-218000" primary control-plane node in "flannel-218000" cluster
	I1205 08:00:28.115687   10268 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:00:28.118691   10268 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:00:28.121687   10268 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 08:00:28.121687   10268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 08:00:28.121687   10268 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1205 08:00:28.121687   10268 cache.go:65] Caching tarball of preloaded images
	I1205 08:00:28.121687   10268 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1205 08:00:28.121687   10268 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1205 08:00:28.122688   10268 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\config.json ...
	I1205 08:00:28.122688   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\config.json: {Name:mke5ee31272e44eda34f26fe740637f610f3fd87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:28.199731   10268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:00:28.199731   10268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 08:00:28.199731   10268 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:00:28.199731   10268 start.go:360] acquireMachinesLock for flannel-218000: {Name:mk84d6aaed88e633e0e71e3d4230b3298144a0d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:00:28.199731   10268 start.go:364] duration metric: took 0s to acquireMachinesLock for "flannel-218000"
	I1205 08:00:28.199731   10268 start.go:93] Provisioning new machine with config: &{Name:flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:00:28.200457   10268 start.go:125] createHost starting for "" (driver="docker")
	I1205 08:00:24.987678    8928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:00:25.009420    8928 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000 for IP: 192.168.85.2
	I1205 08:00:25.009441    8928 certs.go:195] generating shared ca certs ...
	I1205 08:00:25.009441    8928 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:25.009441    8928 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:00:25.010195    8928 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:00:25.010342    8928 certs.go:257] generating profile certs ...
	I1205 08:00:25.010731    8928 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\client.key
	I1205 08:00:25.010831    8928 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\client.crt with IP's: []
	I1205 08:00:25.230318    8928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\client.crt ...
	I1205 08:00:25.230318    8928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\client.crt: {Name:mk2cc34fb6e6b5ca513d4e2bd8dc632522048b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:25.230529    8928 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\client.key ...
	I1205 08:00:25.230529    8928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\client.key: {Name:mk151ef14316eb40e144f616edf6092c5655331c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:25.231576    8928 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.key.ee3eb475
	I1205 08:00:25.231576    8928 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.crt.ee3eb475 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 08:00:25.375273    8928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.crt.ee3eb475 ...
	I1205 08:00:25.375273    8928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.crt.ee3eb475: {Name:mk6d7b9697d979c7ef152422aa2333dd62a99cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:25.376479    8928 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.key.ee3eb475 ...
	I1205 08:00:25.376479    8928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.key.ee3eb475: {Name:mkbbba27a12189642b4299d48752e27c5c521e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:25.377368    8928 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.crt.ee3eb475 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.crt
	I1205 08:00:25.390162    8928 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.key.ee3eb475 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.key
	I1205 08:00:25.390878    8928 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.key
	I1205 08:00:25.390878    8928 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.crt with IP's: []
	I1205 08:00:25.428767    8928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.crt ...
	I1205 08:00:25.429806    8928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.crt: {Name:mka5382e2e37cb32edf7946b2c4ed7fa684d02ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:25.430227    8928 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.key ...
	I1205 08:00:25.430227    8928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.key: {Name:mke7f6726903110edc232ef6233229126e1d971e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:25.442950    8928 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:00:25.443716    8928 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:00:25.443716    8928 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:00:25.444063    8928 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:00:25.444391    8928 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:00:25.444653    8928 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:00:25.444915    8928 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:00:25.446327    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:00:25.669765    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:00:25.698090    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:00:25.724544    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:00:25.753326    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 08:00:25.784728    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 08:00:25.816581    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:00:25.848754    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-218000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:00:25.882927    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:00:25.919786    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:00:25.949222    8928 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:00:26.055741    8928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:00:26.084443    8928 ssh_runner.go:195] Run: openssl version
	I1205 08:00:26.101907    8928 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:00:26.119304    8928 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:00:26.135301    8928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:00:26.142297    8928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:00:26.146295    8928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:00:26.195702    8928 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:00:26.214586    8928 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/80362.pem /etc/ssl/certs/3ec20f2e.0
	I1205 08:00:26.233376    8928 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:26.253568    8928 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:00:26.271402    8928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:26.279454    8928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:26.283445    8928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:26.334056    8928 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:00:26.351562    8928 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 08:00:26.370077    8928 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:00:26.386715    8928 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:00:26.403671    8928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:00:26.413266    8928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:00:26.417510    8928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:00:26.470216    8928 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:00:26.496854    8928 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8036.pem /etc/ssl/certs/51391683.0
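	Note: the three ln -fs commands above build OpenSSL's hashed-directory lookup: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix, which is how libssl locates trust anchors at verification time. A minimal sketch of the same pattern (example.pem is a placeholder name, not a file from this run):

		# link the CA into the shared directory, then add the subject-hash symlink
		sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
		sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"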
	I1205 08:00:26.516414    8928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:00:26.525137    8928 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
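	Note: the failed stat above is the expected outcome on a fresh node. minikube probes for a certificate that only kubeadm creates, and treats a nonzero exit status as "likely first start". A sketch of the same probe, with the path taken from the log:

		if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
			echo "no kubeadm-generated certs yet; treating this as a first start"
		fi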
	I1205 08:00:26.526200    8928 kubeadm.go:401] StartCluster: {Name:enable-default-cni-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:00:26.530889    8928 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:00:26.572526    8928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:00:26.591524    8928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 08:00:26.604526    8928 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 08:00:26.608525    8928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 08:00:26.621517    8928 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 08:00:26.621517    8928 kubeadm.go:158] found existing configuration files:
	
	I1205 08:00:26.625514    8928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 08:00:26.639516    8928 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 08:00:26.643516    8928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 08:00:27.048953    8928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 08:00:27.063913    8928 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 08:00:27.068316    8928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 08:00:27.087961    8928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 08:00:27.101790    8928 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 08:00:27.106112    8928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 08:00:27.126148    8928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 08:00:27.138150    8928 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 08:00:27.142149    8928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
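	Note: the four grep/rm pairs above amount to one stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm regenerates it. The same logic as a loop (a sketch; minikube actually issues the commands one by one over SSH):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
			sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
				|| sudo rm -f "/etc/kubernetes/$f"
		done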
	I1205 08:00:27.159164    8928 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 08:00:27.288849    8928 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 08:00:27.294318    8928 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 08:00:27.404888    8928 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1205 08:00:29.554112    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:00:28.206671   10268 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 08:00:28.206826   10268 start.go:159] libmachine.API.Create for "flannel-218000" (driver="docker")
	I1205 08:00:28.206826   10268 client.go:173] LocalClient.Create starting
	I1205 08:00:28.207394   10268 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 08:00:28.207620   10268 main.go:143] libmachine: Decoding PEM data...
	I1205 08:00:28.207680   10268 main.go:143] libmachine: Parsing certificate...
	I1205 08:00:28.207827   10268 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 08:00:28.207971   10268 main.go:143] libmachine: Decoding PEM data...
	I1205 08:00:28.207971   10268 main.go:143] libmachine: Parsing certificate...
	I1205 08:00:28.212314   10268 cli_runner.go:164] Run: docker network inspect flannel-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 08:00:28.262414   10268 cli_runner.go:211] docker network inspect flannel-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 08:00:28.266410   10268 network_create.go:284] running [docker network inspect flannel-218000] to gather additional debugging logs...
	I1205 08:00:28.266410   10268 cli_runner.go:164] Run: docker network inspect flannel-218000
	W1205 08:00:28.318147   10268 cli_runner.go:211] docker network inspect flannel-218000 returned with exit code 1
	I1205 08:00:28.318304   10268 network_create.go:287] error running [docker network inspect flannel-218000]: docker network inspect flannel-218000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-218000 not found
	I1205 08:00:28.318304   10268 network_create.go:289] output of [docker network inspect flannel-218000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-218000 not found
	
	** /stderr **
	I1205 08:00:28.321546   10268 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 08:00:28.399955   10268 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:00:28.415483   10268 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:00:28.431314   10268 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:00:28.461058   10268 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:00:28.476861   10268 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:00:28.489961   10268 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016891a0}
	I1205 08:00:28.489961   10268 network_create.go:124] attempt to create docker network flannel-218000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1205 08:00:28.493286   10268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-218000 flannel-218000
	I1205 08:00:28.636586   10268 network_create.go:108] docker network flannel-218000 192.168.94.0/24 created
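	Note: the network now exists with the subnet and gateway minikube computed after skipping the reserved ranges above. One way to confirm what it settled on (assuming a local docker CLI) is:

		docker network inspect flannel-218000 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
		# expected for this run: 192.168.94.0/24 192.168.94.1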
	I1205 08:00:28.636586   10268 kic.go:121] calculated static IP "192.168.94.2" for the "flannel-218000" container
	I1205 08:00:28.644445   10268 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 08:00:28.703165   10268 cli_runner.go:164] Run: docker volume create flannel-218000 --label name.minikube.sigs.k8s.io=flannel-218000 --label created_by.minikube.sigs.k8s.io=true
	I1205 08:00:28.763815   10268 oci.go:103] Successfully created a docker volume flannel-218000
	I1205 08:00:28.768232   10268 cli_runner.go:164] Run: docker run --rm --name flannel-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-218000 --entrypoint /usr/bin/test -v flannel-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 08:00:30.138589   10268 cli_runner.go:217] Completed: docker run --rm --name flannel-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-218000 --entrypoint /usr/bin/test -v flannel-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.3702801s)
	I1205 08:00:30.138589   10268 oci.go:107] Successfully prepared a docker volume flannel-218000
	I1205 08:00:30.138589   10268 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 08:00:30.139121   10268 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 08:00:30.142809   10268 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v flannel-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	W1205 08:00:39.588786    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:00:41.394819   10268 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v flannel-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (11.2518313s)
	I1205 08:00:41.394819   10268 kic.go:203] duration metric: took 11.2555189s to extract preloaded images to volume ...
	I1205 08:00:41.399575   10268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:00:41.652019   10268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:00:41.629192533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:00:41.656032   10268 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 08:00:41.925937   10268 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-218000 --name flannel-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-218000 --network flannel-218000 --ip 192.168.94.2 --volume flannel-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 08:00:42.734486   10268 cli_runner.go:164] Run: docker container inspect flannel-218000 --format={{.State.Running}}
	I1205 08:00:42.797258   10268 cli_runner.go:164] Run: docker container inspect flannel-218000 --format={{.State.Status}}
	I1205 08:00:42.861274   10268 cli_runner.go:164] Run: docker exec flannel-218000 stat /var/lib/dpkg/alternatives/iptables
	I1205 08:00:42.979288   10268 oci.go:144] the created container "flannel-218000" has a running status.
	I1205 08:00:42.979288   10268 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa...
	I1205 08:00:43.116953   10268 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 08:00:43.205074   10268 cli_runner.go:164] Run: docker container inspect flannel-218000 --format={{.State.Status}}
	I1205 08:00:43.272384   10268 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 08:00:43.272384   10268 kic_runner.go:114] Args: [docker exec --privileged flannel-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 08:00:43.392465   10268 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa...
	I1205 08:00:45.615930   10268 cli_runner.go:164] Run: docker container inspect flannel-218000 --format={{.State.Status}}
	I1205 08:00:45.666938   10268 machine.go:94] provisionDockerMachine start ...
	I1205 08:00:45.669924   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:45.721927   10268 main.go:143] libmachine: Using SSH client type: native
	I1205 08:00:45.736124   10268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62109 <nil> <nil>}
	I1205 08:00:45.736124   10268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:00:45.935334   10268 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-218000
	
	I1205 08:00:45.935334   10268 ubuntu.go:182] provisioning hostname "flannel-218000"
	I1205 08:00:45.942091   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:45.999602   10268 main.go:143] libmachine: Using SSH client type: native
	I1205 08:00:45.999602   10268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62109 <nil> <nil>}
	I1205 08:00:45.999602   10268 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-218000 && echo "flannel-218000" | sudo tee /etc/hostname
	I1205 08:00:46.199810   10268 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-218000
	
	I1205 08:00:46.204096   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:46.269442   10268 main.go:143] libmachine: Using SSH client type: native
	I1205 08:00:46.269713   10268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62109 <nil> <nil>}
	I1205 08:00:46.269713   10268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:00:46.457526   10268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
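	Note: the empty output suggests the script above had nothing left to do, most likely because docker's --hostname flag had already placed a flannel-218000 entry in /etc/hosts; its fallback branch implements the Debian convention of mapping the hostname to 127.0.1.1 so local tools can resolve it without DNS. A quick in-guest check that the mapping is present:

		getent hosts flannel-218000
		# prints whichever address /etc/hosts maps the name to; in a kic container
		# docker has usually pre-populated it with the container IP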
	I1205 08:00:46.457526   10268 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:00:46.457526   10268 ubuntu.go:190] setting up certificates
	I1205 08:00:46.457526   10268 provision.go:84] configureAuth start
	I1205 08:00:46.461241   10268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-218000
	I1205 08:00:46.515381   10268 provision.go:143] copyHostCerts
	I1205 08:00:46.515381   10268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:00:46.515381   10268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:00:46.515381   10268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:00:46.516386   10268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:00:46.516386   10268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:00:46.516386   10268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:00:46.517383   10268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:00:46.517383   10268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:00:46.517383   10268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:00:46.518386   10268 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.flannel-218000 san=[127.0.0.1 192.168.94.2 flannel-218000 localhost minikube]
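	Note: the server cert above is issued with SANs covering loopback, the container's static IP, and the machine's hostnames, so TLS verification succeeds however the Docker endpoint is addressed. To confirm the SANs in a generated cert (server.pem standing in for the file named above), a standard openssl invocation works:

		openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"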
	I1205 08:00:46.586970   10268 provision.go:177] copyRemoteCerts
	I1205 08:00:46.591727   10268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:00:46.595328   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:46.653663   10268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62109 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa Username:docker}
	I1205 08:00:46.791952   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1205 08:00:46.824009   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:00:46.856754   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:00:46.890071   10268 provision.go:87] duration metric: took 432.538ms to configureAuth
	I1205 08:00:46.891553   10268 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:00:46.893130   10268 config.go:182] Loaded profile config "flannel-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:00:46.895726   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:46.954149   10268 main.go:143] libmachine: Using SSH client type: native
	I1205 08:00:46.954307   10268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62109 <nil> <nil>}
	I1205 08:00:46.954307   10268 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:00:47.147486   10268 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:00:47.147545   10268 ubuntu.go:71] root file system type: overlay
	I1205 08:00:47.147708   10268 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:00:47.151394   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:47.209689   10268 main.go:143] libmachine: Using SSH client type: native
	I1205 08:00:47.210350   10268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62109 <nil> <nil>}
	I1205 08:00:47.210559   10268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:00:50.154185    8928 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 08:00:50.154185    8928 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 08:00:50.155266    8928 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 08:00:50.155266    8928 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 08:00:50.155800    8928 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 08:00:50.155927    8928 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 08:00:50.159402    8928 out.go:252]   - Generating certificates and keys ...
	I1205 08:00:50.159595    8928 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 08:00:50.159834    8928 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 08:00:50.159979    8928 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 08:00:50.160134    8928 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 08:00:50.160475    8928 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 08:00:50.160627    8928 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 08:00:50.160799    8928 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 08:00:50.161234    8928 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-218000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 08:00:50.161362    8928 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 08:00:50.161699    8928 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-218000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 08:00:50.161929    8928 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 08:00:50.162168    8928 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 08:00:50.162343    8928 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 08:00:50.162343    8928 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 08:00:50.162343    8928 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 08:00:50.162343    8928 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 08:00:50.162343    8928 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 08:00:50.163076    8928 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 08:00:50.163184    8928 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 08:00:50.163184    8928 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 08:00:50.163184    8928 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 08:00:50.169463    8928 out.go:252]   - Booting up control plane ...
	I1205 08:00:50.169463    8928 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 08:00:50.169463    8928 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 08:00:50.169463    8928 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 08:00:50.170503    8928 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 08:00:50.170503    8928 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 08:00:50.170503    8928 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 08:00:50.170503    8928 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 08:00:50.170503    8928 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 08:00:50.171476    8928 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 08:00:50.171476    8928 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 08:00:50.171476    8928 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501813231s
	I1205 08:00:50.171476    8928 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 08:00:50.171476    8928 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1205 08:00:50.172479    8928 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 08:00:50.172479    8928 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 08:00:50.172479    8928 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 11.716484779s
	I1205 08:00:50.172479    8928 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 11.981110487s
	I1205 08:00:50.172479    8928 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.002832809s
	I1205 08:00:50.173474    8928 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 08:00:50.173474    8928 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 08:00:50.173474    8928 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 08:00:50.173474    8928 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-218000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 08:00:50.173474    8928 kubeadm.go:319] [bootstrap-token] Using token: lv88co.p4fwv5zc13lrs8mz
	I1205 08:00:50.177461    8928 out.go:252]   - Configuring RBAC rules ...
	I1205 08:00:50.177461    8928 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 08:00:50.178461    8928 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 08:00:50.178461    8928 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 08:00:50.178461    8928 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 08:00:50.178461    8928 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 08:00:50.178461    8928 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 08:00:50.179460    8928 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 08:00:50.179460    8928 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 08:00:50.179460    8928 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 08:00:50.179460    8928 kubeadm.go:319] 
	I1205 08:00:50.179460    8928 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 08:00:50.179460    8928 kubeadm.go:319] 
	I1205 08:00:50.179460    8928 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 08:00:50.179460    8928 kubeadm.go:319] 
	I1205 08:00:50.179460    8928 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 08:00:50.179460    8928 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 08:00:50.179460    8928 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 08:00:50.179460    8928 kubeadm.go:319] 
	I1205 08:00:50.179460    8928 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 08:00:50.179460    8928 kubeadm.go:319] 
	I1205 08:00:50.180468    8928 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 08:00:50.180468    8928 kubeadm.go:319] 
	I1205 08:00:50.180468    8928 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 08:00:50.180468    8928 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 08:00:50.180468    8928 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 08:00:50.180468    8928 kubeadm.go:319] 
	I1205 08:00:50.180468    8928 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 08:00:50.180468    8928 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 08:00:50.180468    8928 kubeadm.go:319] 
	I1205 08:00:50.181478    8928 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lv88co.p4fwv5zc13lrs8mz \
	I1205 08:00:50.181478    8928 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e \
	I1205 08:00:50.181478    8928 kubeadm.go:319] 	--control-plane 
	I1205 08:00:50.181478    8928 kubeadm.go:319] 
	I1205 08:00:50.181478    8928 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 08:00:50.181478    8928 kubeadm.go:319] 
	I1205 08:00:50.181478    8928 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lv88co.p4fwv5zc13lrs8mz \
	I1205 08:00:50.181478    8928 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e 
	I1205 08:00:50.181478    8928 cni.go:84] Creating CNI manager for "bridge"
	I1205 08:00:50.183466    8928 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1205 08:00:49.622506    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:00:47.428477   10268 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:00:47.432436   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:47.492711   10268 main.go:143] libmachine: Using SSH client type: native
	I1205 08:00:47.493545   10268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62109 <nil> <nil>}
	I1205 08:00:47.493609   10268 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:00:49.101479   10268 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 08:00:47.421666939 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
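
The diff-and-replace one-liner issued at 08:00:47.493 is an idempotent update: diff -u exits non-zero only when the rendered unit differs from the one on disk, and only then does the || branch move the .new file into place, reload systemd, re-enable the unit, and restart Docker; identical files mean no restart. The same idiom, sketched with placeholder paths and service name:

    # sketch: install a config file and restart its service only when the content changed
    if ! sudo diff -u /etc/example.conf /tmp/example.conf.new; then
      sudo mv /tmp/example.conf.new /etc/example.conf
      sudo systemctl daemon-reload && sudo systemctl restart example.service
    fi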
	
	I1205 08:00:49.101563   10268 machine.go:97] duration metric: took 3.4345283s to provisionDockerMachine
	I1205 08:00:49.101563   10268 client.go:176] duration metric: took 20.8944042s to LocalClient.Create
	I1205 08:00:49.101625   10268 start.go:167] duration metric: took 20.8944666s to libmachine.API.Create "flannel-218000"
	I1205 08:00:49.101668   10268 start.go:293] postStartSetup for "flannel-218000" (driver="docker")
	I1205 08:00:49.101668   10268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:00:49.106412   10268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:00:49.109405   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:49.165120   10268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62109 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa Username:docker}
	I1205 08:00:49.311131   10268 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:00:49.318739   10268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:00:49.318739   10268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:00:49.318739   10268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:00:49.318739   10268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:00:49.319746   10268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:00:49.324545   10268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:00:49.337248   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:00:49.373503   10268 start.go:296] duration metric: took 271.8307ms for postStartSetup
	I1205 08:00:49.380658   10268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-218000
	I1205 08:00:49.435870   10268 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\config.json ...
	I1205 08:00:49.443132   10268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:00:49.445690   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:49.499723   10268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62109 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa Username:docker}
	I1205 08:00:49.635334   10268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:00:49.649546   10268 start.go:128] duration metric: took 21.4487485s to createHost
	I1205 08:00:49.649546   10268 start.go:83] releasing machines lock for "flannel-218000", held for 21.4494736s
	I1205 08:00:49.653286   10268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-218000
	I1205 08:00:49.717822   10268 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:00:49.722103   10268 ssh_runner.go:195] Run: cat /version.json
	I1205 08:00:49.722103   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:49.725368   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:49.792253   10268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62109 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa Username:docker}
	I1205 08:00:49.795866   10268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62109 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa Username:docker}
	W1205 08:00:49.922650   10268 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 08:00:49.927263   10268 ssh_runner.go:195] Run: systemctl --version
	I1205 08:00:49.950869   10268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:00:49.962155   10268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:00:49.965155   10268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:00:50.015080   10268 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 08:00:50.015194   10268 start.go:496] detecting cgroup driver to use...
	I1205 08:00:50.015194   10268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:00:50.015394   10268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1205 08:00:50.033710   10268 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:00:50.033814   10268 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:00:50.054926   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:00:50.077212   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 08:00:50.091217   10268 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:00:50.095213   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:00:50.113209   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:00:50.133648   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:00:50.166531   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:00:50.186460   10268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:00:50.203457   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:00:50.227979   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:00:50.250720   10268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
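
Taken together, the sed edits between 08:00:50.054 and 08:00:50.250 rewrite /etc/containerd/config.toml in place: pin the pause image, disable restrict_oom_score_adj, move any v1/runc.v1 runtimes to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, re-enable unprivileged ports, and set SystemdCgroup = false to match the cgroupfs driver detected on the host. The full file is not in the log; a fragment consistent with those edits would look like:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false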
	I1205 08:00:50.273696   10268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:00:50.289703   10268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:00:50.311174   10268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:00:50.459747   10268 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 08:00:50.613636   10268 start.go:496] detecting cgroup driver to use...
	I1205 08:00:50.613755   10268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:00:50.618598   10268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:00:50.645555   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:00:50.676692   10268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:00:50.737531   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:00:50.762835   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:00:50.785611   10268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:00:50.817614   10268 ssh_runner.go:195] Run: which cri-dockerd
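
The tee into /etc/crictl.yaml at 08:00:50.785 is the second crictl configuration in this start-up: at 08:00:50.015 the endpoint was containerd's socket, and once the docker runtime is selected it is repointed at cri-dockerd, leaving the file as the single line:

    runtime-endpoint: unix:///var/run/cri-dockerd.sock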
	I1205 08:00:50.829502   10268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:00:50.843360   10268 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:00:50.876626   10268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:00:51.023060   10268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:00:51.158135   10268 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:00:51.158135   10268 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
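
Only the size of the daemon.json payload (130 bytes) appears in the log. Given the 'configuring docker to use "cgroupfs" as cgroup driver' message just above, minikube's usual template is along these lines, though the exact bytes here are an assumption:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }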
	I1205 08:00:51.187836   10268 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:00:51.210750   10268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:00:51.360873   10268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:00:52.290799   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:00:52.318961   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:00:52.345765   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:00:52.371135   10268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:00:52.536191   10268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:00:52.686185   10268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:00:52.839485   10268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:00:52.866778   10268 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:00:52.890943   10268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:00:53.035706   10268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:00:53.145751   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:00:53.164289   10268 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:00:53.169631   10268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:00:53.179008   10268 start.go:564] Will wait 60s for crictl version
	I1205 08:00:53.183072   10268 ssh_runner.go:195] Run: which crictl
	I1205 08:00:53.194325   10268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:00:53.233897   10268 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:00:53.237509   10268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:00:53.283708   10268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:00:50.190468    8928 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 08:00:50.252516    8928 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
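
The 496-byte 1-k8s.conflist is the bridge CNI configuration announced at 08:00:50.183; its exact contents are not logged. A bridge conflist of the general shape used here would look roughly like the following, where every field value is an assumption:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }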
	I1205 08:00:50.348654    8928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 08:00:50.357154    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:50.357255    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-218000 minikube.k8s.io/updated_at=2025_12_05T08_00_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=enable-default-cni-218000 minikube.k8s.io/primary=true
	I1205 08:00:50.447819    8928 ops.go:34] apiserver oom_adj: -16
	I1205 08:00:50.761500    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:51.260716    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:51.760588    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:52.259917    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:52.761934    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:53.259606    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:53.760655    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:54.258627    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:54.761289    8928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:54.870259    8928 kubeadm.go:1114] duration metric: took 4.5215331s to wait for elevateKubeSystemPrivileges
	I1205 08:00:54.870259    8928 kubeadm.go:403] duration metric: took 28.3436084s to StartCluster
	I1205 08:00:54.870790    8928 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.870935    8928 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:00:54.872599    8928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.873790    8928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 08:00:54.873899    8928 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:00:54.874016    8928 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:00:54.874199    8928 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-218000"
	I1205 08:00:54.874199    8928 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-218000"
	I1205 08:00:54.874199    8928 host.go:66] Checking if "enable-default-cni-218000" exists ...
	I1205 08:00:54.874199    8928 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-218000"
	I1205 08:00:54.874199    8928 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-218000"
	I1205 08:00:54.874199    8928 config.go:182] Loaded profile config "enable-default-cni-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:00:54.876691    8928 out.go:179] * Verifying Kubernetes components...
	I1205 08:00:54.887914    8928 cli_runner.go:164] Run: docker container inspect enable-default-cni-218000 --format={{.State.Status}}
	I1205 08:00:54.887914    8928 cli_runner.go:164] Run: docker container inspect enable-default-cni-218000 --format={{.State.Status}}
	I1205 08:00:54.889911    8928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:00:54.940926    8928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:00:54.943903    8928 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:00:54.943903    8928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:00:54.948923    8928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-218000
	I1205 08:00:54.962902    8928 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-218000"
	I1205 08:00:54.962902    8928 host.go:66] Checking if "enable-default-cni-218000" exists ...
	I1205 08:00:54.972896    8928 cli_runner.go:164] Run: docker container inspect enable-default-cni-218000 --format={{.State.Status}}
	I1205 08:00:55.005899    8928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62030 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-218000\id_rsa Username:docker}
	I1205 08:00:55.021909    8928 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:00:55.021909    8928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:00:55.024912    8928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-218000
	I1205 08:00:55.086674    8928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62030 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-218000\id_rsa Username:docker}
	I1205 08:00:55.371669    8928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 08:00:55.450447    8928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:00:55.467219    8928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:00:55.673136    8928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:00:56.253417    8928 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
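
The replace pipeline at 08:00:55.371 rewrites the coredns ConfigMap so host.minikube.internal resolves from inside pods: the sed inserts a log directive ahead of errors and splices a hosts block ahead of the forward plugin. The injected fragment, exactly as written by that command, is:

    hosts {
       192.168.65.254 host.minikube.internal
       fallthrough
    }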
	I1205 08:00:56.258759    8928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-218000
	I1205 08:00:56.319344    8928 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-218000" to be "Ready" ...
	I1205 08:00:56.350528    8928 node_ready.go:49] node "enable-default-cni-218000" is "Ready"
	I1205 08:00:56.350528    8928 node_ready.go:38] duration metric: took 31.1829ms for node "enable-default-cni-218000" to be "Ready" ...
	I1205 08:00:56.350528    8928 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:00:56.356541    8928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:00:56.761244    8928 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-218000" context rescaled to 1 replicas
	I1205 08:00:56.946568    8928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4792816s)
	I1205 08:00:56.946568    8928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.2734121s)
	I1205 08:00:56.946568    8928 api_server.go:72] duration metric: took 2.072584s to wait for apiserver process to appear ...
	I1205 08:00:56.946568    8928 api_server.go:88] waiting for apiserver healthz status ...
	I1205 08:00:56.946568    8928 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62035/healthz ...
	I1205 08:00:56.957519    8928 api_server.go:279] https://127.0.0.1:62035/healthz returned 200:
	ok
	I1205 08:00:56.960523    8928 api_server.go:141] control plane version: v1.34.2
	I1205 08:00:56.960523    8928 api_server.go:131] duration metric: took 13.9549ms to wait for apiserver health ...
	I1205 08:00:56.960523    8928 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 08:00:56.969528    8928 system_pods.go:59] 8 kube-system pods found
	I1205 08:00:56.969528    8928 system_pods.go:61] "coredns-66bc5c9577-2h45r" [4d4e8bf9-56e5-4931-baf4-413c3635c11f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:56.969528    8928 system_pods.go:61] "coredns-66bc5c9577-gzk4l" [463b363d-1f7a-4cdc-adb6-89c7ced3a2db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:56.969528    8928 system_pods.go:61] "etcd-enable-default-cni-218000" [a0dab7c7-526a-4910-80bb-d2a181fde626] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 08:00:56.969528    8928 system_pods.go:61] "kube-apiserver-enable-default-cni-218000" [6f970997-cf7e-43c7-a522-ce95c21df3d9] Running
	I1205 08:00:56.969528    8928 system_pods.go:61] "kube-controller-manager-enable-default-cni-218000" [e4d3f771-7905-4de3-b701-bef4a05a2dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:00:56.969528    8928 system_pods.go:61] "kube-proxy-rhcz4" [b7390797-daa5-4267-965f-6a10baeb2f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:00:56.969528    8928 system_pods.go:61] "kube-scheduler-enable-default-cni-218000" [cca8af9a-b099-455f-82dc-fadde3ff88b0] Running
	I1205 08:00:56.969528    8928 system_pods.go:61] "storage-provisioner" [7eb5c70b-6783-4ade-9671-923f89ffdff4] Pending
	I1205 08:00:56.969528    8928 system_pods.go:74] duration metric: took 9.0047ms to wait for pod list to return data ...
	I1205 08:00:56.969528    8928 default_sa.go:34] waiting for default service account to be created ...
	I1205 08:00:57.048552    8928 default_sa.go:45] found service account: "default"
	I1205 08:00:57.048552    8928 default_sa.go:55] duration metric: took 79.0231ms for default service account to be created ...
	I1205 08:00:57.048627    8928 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 08:00:57.057688    8928 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 08:00:53.324001   10268 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.0.4 ...
	I1205 08:00:53.327018   10268 cli_runner.go:164] Run: docker exec -t flannel-218000 dig +short host.docker.internal
	I1205 08:00:53.461321   10268 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:00:53.465260   10268 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:00:53.471977   10268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
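
This one-liner is minikube's idempotent /etc/hosts update: filter out any existing host.minikube.internal entry, append the fresh mapping, write to a temp file, then copy the result back with sudo (a bare > redirect would not run as root). The same idiom with placeholder values:

    # sketch: pin a name to an address in /etc/hosts (example.internal / 192.0.2.1 are placeholders)
    { grep -v $'\texample.internal$' /etc/hosts; printf '192.0.2.1\texample.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts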
	I1205 08:00:53.494424   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:00:53.550157   10268 kubeadm.go:884] updating cluster {Name:flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:00:53.550203   10268 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 08:00:53.554115   10268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:00:53.586727   10268 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:00:53.586727   10268 docker.go:621] Images already preloaded, skipping extraction
	I1205 08:00:53.590563   10268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:00:53.626590   10268 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:00:53.626590   10268 cache_images.go:86] Images are preloaded, skipping loading
	I1205 08:00:53.626590   10268 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 docker true true} ...
	I1205 08:00:53.626590   10268 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-218000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1205 08:00:53.630804   10268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:00:53.717328   10268 cni.go:84] Creating CNI manager for "flannel"
	I1205 08:00:53.717328   10268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 08:00:53.717328   10268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-218000 NodeName:flannel-218000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:00:53.717328   10268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "flannel-218000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:00:53.722458   10268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 08:00:53.735438   10268 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:00:53.740332   10268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:00:53.759658   10268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 08:00:53.782029   10268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 08:00:53.802885   10268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1205 08:00:53.829975   10268 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:00:53.837873   10268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:00:53.860923   10268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:00:54.014482   10268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:00:54.037803   10268 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000 for IP: 192.168.94.2
	I1205 08:00:54.037803   10268 certs.go:195] generating shared ca certs ...
	I1205 08:00:54.037803   10268 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.038658   10268 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:00:54.039280   10268 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:00:54.039409   10268 certs.go:257] generating profile certs ...
	I1205 08:00:54.039778   10268 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\client.key
	I1205 08:00:54.039778   10268 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\client.crt with IP's: []
	I1205 08:00:54.180187   10268 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\client.crt ...
	I1205 08:00:54.180187   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\client.crt: {Name:mk1bcfab5f81fe6c7e6dd88275164b745f4054ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.182198   10268 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\client.key ...
	I1205 08:00:54.182198   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\client.key: {Name:mk40070d90151b5e2f5eb763b5b4c5794e6c149c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.183183   10268 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.key.8a55c941
	I1205 08:00:54.183597   10268 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.crt.8a55c941 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1205 08:00:54.244244   10268 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.crt.8a55c941 ...
	I1205 08:00:54.244244   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.crt.8a55c941: {Name:mkb6bbc7bd478c391490f268e60771513bfcc93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.245895   10268 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.key.8a55c941 ...
	I1205 08:00:54.245895   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.key.8a55c941: {Name:mkc3e46ee94df00351bc6be8834ffa61acf2905b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.246896   10268 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.crt.8a55c941 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.crt
	I1205 08:00:54.260614   10268 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.key.8a55c941 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.key
	I1205 08:00:54.261626   10268 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.key
	I1205 08:00:54.261626   10268 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.crt with IP's: []
	I1205 08:00:54.383366   10268 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.crt ...
	I1205 08:00:54.383366   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.crt: {Name:mk427808cdea7a101d7a6b86b2463eb0415842a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.384173   10268 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.key ...
	I1205 08:00:54.384173   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.key: {Name:mkfb7e3f9a87f998214288fc2ca0f3588eff3467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:54.400297   10268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:00:54.400297   10268 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:00:54.400829   10268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:00:54.400932   10268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:00:54.400932   10268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:00:54.401456   10268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:00:54.401489   10268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:00:54.403345   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:00:54.433600   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:00:54.467467   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:00:54.497229   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:00:54.530216   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 08:00:54.561650   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 08:00:54.591843   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:00:54.622223   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-218000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 08:00:54.649249   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:00:54.681448   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:00:54.716018   10268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:00:54.752296   10268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:00:54.781553   10268 ssh_runner.go:195] Run: openssl version
	I1205 08:00:54.797575   10268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:00:54.815571   10268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:00:54.846858   10268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:00:54.856170   10268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:00:54.861797   10268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:00:54.925904   10268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:00:54.945910   10268 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8036.pem /etc/ssl/certs/51391683.0
	I1205 08:00:54.969923   10268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:00:54.987898   10268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:00:55.004896   10268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:00:55.011907   10268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:00:55.015903   10268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:00:55.080668   10268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:00:55.097667   10268 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/80362.pem /etc/ssl/certs/3ec20f2e.0
	I1205 08:00:55.114960   10268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:55.131475   10268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:00:55.152066   10268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:55.162576   10268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:55.167096   10268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:00:55.216002   10268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:00:55.234129   10268 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
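
Each ln -fs above follows OpenSSL's hashed-directory convention: TLS libraries locate a CA in /etc/ssl/certs by the subject-name hash of the certificate plus a .0 suffix, which is why openssl x509 -hash -noout is run against each PEM first. For the minikube CA this works out to:

    # compute the symlink name OpenSSL expects for a CA certificate
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0, per the log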
	I1205 08:00:55.260864   10268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:00:55.272743   10268 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 08:00:55.272743   10268 kubeadm.go:401] StartCluster: {Name:flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:00:55.277306   10268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:00:55.308670   10268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:00:55.324664   10268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 08:00:55.339694   10268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 08:00:55.345676   10268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 08:00:55.359667   10268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 08:00:55.359667   10268 kubeadm.go:158] found existing configuration files:
	
	I1205 08:00:55.363705   10268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 08:00:55.377672   10268 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 08:00:55.381666   10268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 08:00:55.398594   10268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 08:00:55.411528   10268 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 08:00:55.415531   10268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 08:00:55.430524   10268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 08:00:55.443542   10268 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 08:00:55.450213   10268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 08:00:55.473728   10268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 08:00:55.502507   10268 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 08:00:55.505505   10268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 08:00:55.521540   10268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 08:00:55.686739   10268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 08:00:55.693253   10268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 08:00:55.797466   10268 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
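
The three [WARNING] lines are expected under the docker driver: the "node" is a container sharing the host kernel, so the swap, cgroup v1, and kubelet-enablement checks are informational here, and the kubeadm init invocation above passes a matching --ignore-preflight-errors list (SystemVerification is skipped explicitly, per the kubeadm.go:215 line at 08:00:55.339). To inspect those checks outside a full init, kubeadm can run the preflight phase on its own, e.g. (a sketch, with the ignore list trimmed for illustration):

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Swap,SystemVerification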
	I1205 08:00:57.060681    8928 addons.go:530] duration metric: took 2.1866302s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 08:00:57.072689    8928 system_pods.go:86] 8 kube-system pods found
	I1205 08:00:57.072689    8928 system_pods.go:89] "coredns-66bc5c9577-2h45r" [4d4e8bf9-56e5-4931-baf4-413c3635c11f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:57.072689    8928 system_pods.go:89] "coredns-66bc5c9577-gzk4l" [463b363d-1f7a-4cdc-adb6-89c7ced3a2db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:57.072689    8928 system_pods.go:89] "etcd-enable-default-cni-218000" [a0dab7c7-526a-4910-80bb-d2a181fde626] Running
	I1205 08:00:57.072689    8928 system_pods.go:89] "kube-apiserver-enable-default-cni-218000" [6f970997-cf7e-43c7-a522-ce95c21df3d9] Running
	I1205 08:00:57.072689    8928 system_pods.go:89] "kube-controller-manager-enable-default-cni-218000" [e4d3f771-7905-4de3-b701-bef4a05a2dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:00:57.072689    8928 system_pods.go:89] "kube-proxy-rhcz4" [b7390797-daa5-4267-965f-6a10baeb2f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:00:57.072689    8928 system_pods.go:89] "kube-scheduler-enable-default-cni-218000" [cca8af9a-b099-455f-82dc-fadde3ff88b0] Running
	I1205 08:00:57.072689    8928 system_pods.go:89] "storage-provisioner" [7eb5c70b-6783-4ade-9671-923f89ffdff4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:00:57.072689    8928 retry.go:31] will retry after 217.87456ms: missing components: kube-dns, kube-proxy
	I1205 08:00:57.300363    8928 system_pods.go:86] 8 kube-system pods found
	I1205 08:00:57.300363    8928 system_pods.go:89] "coredns-66bc5c9577-2h45r" [4d4e8bf9-56e5-4931-baf4-413c3635c11f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:57.300363    8928 system_pods.go:89] "coredns-66bc5c9577-gzk4l" [463b363d-1f7a-4cdc-adb6-89c7ced3a2db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:57.300363    8928 system_pods.go:89] "etcd-enable-default-cni-218000" [a0dab7c7-526a-4910-80bb-d2a181fde626] Running
	I1205 08:00:57.300363    8928 system_pods.go:89] "kube-apiserver-enable-default-cni-218000" [6f970997-cf7e-43c7-a522-ce95c21df3d9] Running
	I1205 08:00:57.300363    8928 system_pods.go:89] "kube-controller-manager-enable-default-cni-218000" [e4d3f771-7905-4de3-b701-bef4a05a2dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:00:57.300363    8928 system_pods.go:89] "kube-proxy-rhcz4" [b7390797-daa5-4267-965f-6a10baeb2f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:00:57.300363    8928 system_pods.go:89] "kube-scheduler-enable-default-cni-218000" [cca8af9a-b099-455f-82dc-fadde3ff88b0] Running
	I1205 08:00:57.300363    8928 system_pods.go:89] "storage-provisioner" [7eb5c70b-6783-4ade-9671-923f89ffdff4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:00:57.300363    8928 retry.go:31] will retry after 322.690907ms: missing components: kube-dns, kube-proxy
	I1205 08:00:57.655094    8928 system_pods.go:86] 8 kube-system pods found
	I1205 08:00:57.655094    8928 system_pods.go:89] "coredns-66bc5c9577-2h45r" [4d4e8bf9-56e5-4931-baf4-413c3635c11f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:57.655257    8928 system_pods.go:89] "coredns-66bc5c9577-gzk4l" [463b363d-1f7a-4cdc-adb6-89c7ced3a2db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:57.655257    8928 system_pods.go:89] "etcd-enable-default-cni-218000" [a0dab7c7-526a-4910-80bb-d2a181fde626] Running
	I1205 08:00:57.655257    8928 system_pods.go:89] "kube-apiserver-enable-default-cni-218000" [6f970997-cf7e-43c7-a522-ce95c21df3d9] Running
	I1205 08:00:57.655257    8928 system_pods.go:89] "kube-controller-manager-enable-default-cni-218000" [e4d3f771-7905-4de3-b701-bef4a05a2dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:00:57.655257    8928 system_pods.go:89] "kube-proxy-rhcz4" [b7390797-daa5-4267-965f-6a10baeb2f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:00:57.655341    8928 system_pods.go:89] "kube-scheduler-enable-default-cni-218000" [cca8af9a-b099-455f-82dc-fadde3ff88b0] Running
	I1205 08:00:57.655341    8928 system_pods.go:89] "storage-provisioner" [7eb5c70b-6783-4ade-9671-923f89ffdff4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:00:57.655341    8928 retry.go:31] will retry after 369.19769ms: missing components: kube-dns, kube-proxy
	I1205 08:00:58.043824    8928 system_pods.go:86] 8 kube-system pods found
	I1205 08:00:58.043824    8928 system_pods.go:89] "coredns-66bc5c9577-2h45r" [4d4e8bf9-56e5-4931-baf4-413c3635c11f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:58.043824    8928 system_pods.go:89] "coredns-66bc5c9577-gzk4l" [463b363d-1f7a-4cdc-adb6-89c7ced3a2db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:58.043824    8928 system_pods.go:89] "etcd-enable-default-cni-218000" [a0dab7c7-526a-4910-80bb-d2a181fde626] Running
	I1205 08:00:58.043824    8928 system_pods.go:89] "kube-apiserver-enable-default-cni-218000" [6f970997-cf7e-43c7-a522-ce95c21df3d9] Running
	I1205 08:00:58.043824    8928 system_pods.go:89] "kube-controller-manager-enable-default-cni-218000" [e4d3f771-7905-4de3-b701-bef4a05a2dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:00:58.043824    8928 system_pods.go:89] "kube-proxy-rhcz4" [b7390797-daa5-4267-965f-6a10baeb2f04] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:00:58.043824    8928 system_pods.go:89] "kube-scheduler-enable-default-cni-218000" [cca8af9a-b099-455f-82dc-fadde3ff88b0] Running
	I1205 08:00:58.043824    8928 system_pods.go:89] "storage-provisioner" [7eb5c70b-6783-4ade-9671-923f89ffdff4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:00:58.043824    8928 retry.go:31] will retry after 455.364156ms: missing components: kube-proxy
	I1205 08:00:58.507373    8928 system_pods.go:86] 8 kube-system pods found
	I1205 08:00:58.507373    8928 system_pods.go:89] "coredns-66bc5c9577-2h45r" [4d4e8bf9-56e5-4931-baf4-413c3635c11f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:58.507373    8928 system_pods.go:89] "coredns-66bc5c9577-gzk4l" [463b363d-1f7a-4cdc-adb6-89c7ced3a2db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:00:58.507373    8928 system_pods.go:89] "etcd-enable-default-cni-218000" [a0dab7c7-526a-4910-80bb-d2a181fde626] Running
	I1205 08:00:58.507373    8928 system_pods.go:89] "kube-apiserver-enable-default-cni-218000" [6f970997-cf7e-43c7-a522-ce95c21df3d9] Running
	I1205 08:00:58.507373    8928 system_pods.go:89] "kube-controller-manager-enable-default-cni-218000" [e4d3f771-7905-4de3-b701-bef4a05a2dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:00:58.507373    8928 system_pods.go:89] "kube-proxy-rhcz4" [b7390797-daa5-4267-965f-6a10baeb2f04] Running
	I1205 08:00:58.507373    8928 system_pods.go:89] "kube-scheduler-enable-default-cni-218000" [cca8af9a-b099-455f-82dc-fadde3ff88b0] Running
	I1205 08:00:58.507373    8928 system_pods.go:89] "storage-provisioner" [7eb5c70b-6783-4ade-9671-923f89ffdff4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:00:58.507373    8928 system_pods.go:126] duration metric: took 1.4587229s to wait for k8s-apps to be running ...
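The retry loop above re-lists the kube-system pods with a growing backoff until the last missing components (kube-dns, then kube-proxy) report Running. Outside minikube's internals the same wait can be expressed directly with kubectl; a sketch, assuming the profile's kubeconfig is the active context:

    # Block until CoreDNS and kube-proxy pods report Ready (up to 2 minutes each).
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=120s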
	I1205 08:00:58.507373    8928 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 08:00:58.512336    8928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 08:00:58.531978    8928 system_svc.go:56] duration metric: took 24.6037ms WaitForService to wait for kubelet
	I1205 08:00:58.532036    8928 kubeadm.go:587] duration metric: took 3.658027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:00:58.532127    8928 node_conditions.go:102] verifying NodePressure condition ...
	I1205 08:00:58.540171    8928 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1205 08:00:58.540171    8928 node_conditions.go:123] node cpu capacity is 16
	I1205 08:00:58.540171    8928 node_conditions.go:105] duration metric: took 7.9799ms to run NodePressure ...
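The NodePressure verification reads the capacity the node reports (here 16 CPUs and roughly 1 TiB of ephemeral storage). The same figures are visible on the node object itself; a sketch using the node name from this run:

    # Print the node's reported capacity map (cpu, memory, ephemeral-storage, pods).
    kubectl get node enable-default-cni-218000 -o jsonpath='{.status.capacity}'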
	I1205 08:00:58.540257    8928 start.go:242] waiting for startup goroutines ...
	I1205 08:00:58.540257    8928 start.go:247] waiting for cluster config update ...
	I1205 08:00:58.540286    8928 start.go:256] writing updated cluster config ...
	I1205 08:00:58.544442    8928 ssh_runner.go:195] Run: rm -f paused
	I1205 08:00:58.551764    8928 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:00:58.560124    8928 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2h45r" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 08:00:59.660282    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:01:00.572166    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:03.073890    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:05.075729    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:07.573698    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:09.576250    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
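Unlike the earlier list-based retries, pod_ready.go watches a single pod's Ready condition, and each W line above is one failed poll. An equivalent one-off check with kubectl's JSONPath filter, assuming the pod name from this run:

    # Prints "True" once the pod's Ready condition is satisfied.
    kubectl -n kube-system get pod coredns-66bc5c9577-2h45r \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'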
	I1205 08:01:11.534836   10268 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 08:01:11.534836   10268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 08:01:11.534836   10268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 08:01:11.534836   10268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 08:01:11.535663   10268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 08:01:11.535774   10268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 08:01:11.537985   10268 out.go:252]   - Generating certificates and keys ...
	I1205 08:01:11.538175   10268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 08:01:11.538328   10268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 08:01:11.538522   10268 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 08:01:11.538727   10268 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 08:01:11.538816   10268 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 08:01:11.538978   10268 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 08:01:11.539129   10268 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 08:01:11.539129   10268 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-218000 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1205 08:01:11.539129   10268 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 08:01:11.539664   10268 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-218000 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1205 08:01:11.539841   10268 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 08:01:11.539986   10268 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 08:01:11.540127   10268 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 08:01:11.540300   10268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 08:01:11.540460   10268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 08:01:11.540626   10268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 08:01:11.540834   10268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 08:01:11.540834   10268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 08:01:11.540834   10268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 08:01:11.540834   10268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 08:01:11.541604   10268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 08:01:11.543877   10268 out.go:252]   - Booting up control plane ...
	I1205 08:01:11.544032   10268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 08:01:11.544217   10268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 08:01:11.544217   10268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 08:01:11.544217   10268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 08:01:11.544217   10268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 08:01:11.544991   10268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 08:01:11.545156   10268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 08:01:11.545327   10268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 08:01:11.545327   10268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 08:01:11.545915   10268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 08:01:11.546017   10268 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00183563s
	I1205 08:01:11.546215   10268 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 08:01:11.546446   10268 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1205 08:01:11.546694   10268 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 08:01:11.546694   10268 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 08:01:11.546694   10268 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.431247562s
	I1205 08:01:11.547431   10268 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.484864334s
	I1205 08:01:11.547584   10268 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502186138s
	I1205 08:01:11.547822   10268 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 08:01:11.547906   10268 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 08:01:11.547906   10268 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 08:01:11.548464   10268 kubeadm.go:319] [mark-control-plane] Marking the node flannel-218000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 08:01:11.548558   10268 kubeadm.go:319] [bootstrap-token] Using token: rmx9t9.ym9ro721w95j4b57
	I1205 08:01:11.551781   10268 out.go:252]   - Configuring RBAC rules ...
	I1205 08:01:11.552367   10268 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 08:01:11.552427   10268 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 08:01:11.552427   10268 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 08:01:11.553019   10268 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 08:01:11.553019   10268 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 08:01:11.553019   10268 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 08:01:11.553698   10268 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 08:01:11.553698   10268 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 08:01:11.553698   10268 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 08:01:11.553698   10268 kubeadm.go:319] 
	I1205 08:01:11.553698   10268 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 08:01:11.553698   10268 kubeadm.go:319] 
	I1205 08:01:11.553698   10268 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 08:01:11.553698   10268 kubeadm.go:319] 
	I1205 08:01:11.553698   10268 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 08:01:11.553698   10268 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 08:01:11.553698   10268 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 08:01:11.553698   10268 kubeadm.go:319] 
	I1205 08:01:11.554701   10268 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 08:01:11.554701   10268 kubeadm.go:319] 
	I1205 08:01:11.554701   10268 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 08:01:11.554701   10268 kubeadm.go:319] 
	I1205 08:01:11.554701   10268 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 08:01:11.554701   10268 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 08:01:11.554701   10268 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 08:01:11.554701   10268 kubeadm.go:319] 
	I1205 08:01:11.554701   10268 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 08:01:11.554701   10268 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 08:01:11.554701   10268 kubeadm.go:319] 
	I1205 08:01:11.554701   10268 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rmx9t9.ym9ro721w95j4b57 \
	I1205 08:01:11.554701   10268 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e \
	I1205 08:01:11.554701   10268 kubeadm.go:319] 	--control-plane 
	I1205 08:01:11.554701   10268 kubeadm.go:319] 
	I1205 08:01:11.554701   10268 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 08:01:11.554701   10268 kubeadm.go:319] 
	I1205 08:01:11.556119   10268 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rmx9t9.ym9ro721w95j4b57 \
	I1205 08:01:11.556197   10268 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e 
	I1205 08:01:11.556197   10268 cni.go:84] Creating CNI manager for "flannel"
	I1205 08:01:11.559448   10268 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	W1205 08:01:09.695070    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:01:11.569100   10268 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 08:01:11.578863   10268 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 08:01:11.578863   10268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1205 08:01:11.603058   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 08:01:11.986780   10268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 08:01:11.992755   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:11.992755   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-218000 minikube.k8s.io/updated_at=2025_12_05T08_01_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=flannel-218000 minikube.k8s.io/primary=true
	I1205 08:01:12.004949   10268 ops.go:34] apiserver oom_adj: -16
	I1205 08:01:12.155698   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1205 08:01:12.075077    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:14.571439    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	I1205 08:01:12.656799   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:13.156822   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:13.657122   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:14.156110   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:14.656828   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:15.156633   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:15.656573   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:16.156135   10268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:16.260494   10268 kubeadm.go:1114] duration metric: took 4.273622s to wait for elevateKubeSystemPrivileges
	I1205 08:01:16.260494   10268 kubeadm.go:403] duration metric: took 20.9874179s to StartCluster
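The eight identical `kubectl get sa default` runs above are a poll: kubeadm creates the default ServiceAccount asynchronously, and minikube loops (here for about 4.3s) until it exists before binding kube-system to cluster-admin. A compact form of the same wait, using the paths from this run:

    # Poll twice a second until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done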
	I1205 08:01:16.260572   10268 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:16.260699   10268 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:01:16.262703   10268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:16.263973   10268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 08:01:16.263973   10268 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:01:16.264167   10268 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:01:16.264331   10268 addons.go:70] Setting storage-provisioner=true in profile "flannel-218000"
	I1205 08:01:16.264410   10268 addons.go:239] Setting addon storage-provisioner=true in "flannel-218000"
	I1205 08:01:16.264520   10268 host.go:66] Checking if "flannel-218000" exists ...
	I1205 08:01:16.264647   10268 addons.go:70] Setting default-storageclass=true in profile "flannel-218000"
	I1205 08:01:16.264740   10268 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-218000"
	I1205 08:01:16.264740   10268 config.go:182] Loaded profile config "flannel-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:01:16.269566   10268 out.go:179] * Verifying Kubernetes components...
	I1205 08:01:16.278590   10268 cli_runner.go:164] Run: docker container inspect flannel-218000 --format={{.State.Status}}
	I1205 08:01:16.279779   10268 cli_runner.go:164] Run: docker container inspect flannel-218000 --format={{.State.Status}}
	I1205 08:01:16.279779   10268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:01:16.343807   10268 addons.go:239] Setting addon default-storageclass=true in "flannel-218000"
	I1205 08:01:16.343807   10268 host.go:66] Checking if "flannel-218000" exists ...
	I1205 08:01:16.347799   10268 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:01:16.349807   10268 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:01:16.349807   10268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:01:16.354794   10268 cli_runner.go:164] Run: docker container inspect flannel-218000 --format={{.State.Status}}
	I1205 08:01:16.354794   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:01:16.408787   10268 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:01:16.408787   10268 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:01:16.409787   10268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62109 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa Username:docker}
	I1205 08:01:16.411799   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:01:16.466796   10268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62109 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-218000\id_rsa Username:docker}
	I1205 08:01:16.758479   10268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
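The one-line pipeline above edits CoreDNS's Corefile in place: it inserts a hosts block mapping host.minikube.internal to the host gateway before the forward plugin, adds a log directive before errors, and replaces the ConfigMap. The same pipeline reformatted for readability (with the bundled kubectl and its --kubeconfig flag abbreviated to plain kubectl):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -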
	I1205 08:01:16.845855   10268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:01:16.862692   10268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:01:16.942490   10268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:01:17.552765   10268 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1205 08:01:17.557191   10268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" flannel-218000
	I1205 08:01:17.620097   10268 node_ready.go:35] waiting up to 15m0s for node "flannel-218000" to be "Ready" ...
	I1205 08:01:18.081491   10268 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-218000" context rescaled to 1 replicas
	I1205 08:01:18.141300   10268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1987907s)
	I1205 08:01:18.141415   10268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2785881s)
	I1205 08:01:18.163472   10268 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1205 08:01:16.572303    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:18.573685    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:19.730135    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:01:18.169591   10268 addons.go:530] duration metric: took 1.9053939s for enable addons: enabled=[storage-provisioner default-storageclass]
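For reference, the user-facing equivalent of what this startup path just did would be enabling the two addons on the profile (not run in this log):

    minikube -p flannel-218000 addons enable storage-provisioner
    minikube -p flannel-218000 addons enable default-storageclass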
	W1205 08:01:19.638736   10268 node_ready.go:57] node "flannel-218000" has "Ready":"False" status (will retry)
	W1205 08:01:22.126862   10268 node_ready.go:57] node "flannel-218000" has "Ready":"False" status (will retry)
	W1205 08:01:20.586546    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:23.071718    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:24.627393   10268 node_ready.go:57] node "flannel-218000" has "Ready":"False" status (will retry)
	W1205 08:01:27.126432   10268 node_ready.go:57] node "flannel-218000" has "Ready":"False" status (will retry)
	W1205 08:01:25.072278    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	W1205 08:01:27.074267    8928 pod_ready.go:104] pod "coredns-66bc5c9577-2h45r" is not "Ready", error: <nil>
	I1205 08:01:29.077771    8928 pod_ready.go:94] pod "coredns-66bc5c9577-2h45r" is "Ready"
	I1205 08:01:29.078315    8928 pod_ready.go:86] duration metric: took 30.5170815s for pod "coredns-66bc5c9577-2h45r" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.078315    8928 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gzk4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.099631    8928 pod_ready.go:99] pod "coredns-66bc5c9577-gzk4l" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-gzk4l" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-gzk4l" not found
	I1205 08:01:29.099667    8928 pod_ready.go:86] duration metric: took 21.3517ms for pod "coredns-66bc5c9577-gzk4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.108042    8928 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.117723    8928 pod_ready.go:94] pod "etcd-enable-default-cni-218000" is "Ready"
	I1205 08:01:29.117723    8928 pod_ready.go:86] duration metric: took 9.6809ms for pod "etcd-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.124301    8928 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.148143    8928 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-218000" is "Ready"
	I1205 08:01:29.148143    8928 pod_ready.go:86] duration metric: took 23.777ms for pod "kube-apiserver-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.153400    8928 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.468488    8928 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-218000" is "Ready"
	I1205 08:01:29.468488    8928 pod_ready.go:86] duration metric: took 315.0837ms for pod "kube-controller-manager-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:29.668006    8928 pod_ready.go:83] waiting for pod "kube-proxy-rhcz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:30.067091    8928 pod_ready.go:94] pod "kube-proxy-rhcz4" is "Ready"
	I1205 08:01:30.067091    8928 pod_ready.go:86] duration metric: took 399.0794ms for pod "kube-proxy-rhcz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:30.267127    8928 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:30.667443    8928 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-218000" is "Ready"
	I1205 08:01:30.667443    8928 pod_ready.go:86] duration metric: took 399.7734ms for pod "kube-scheduler-enable-default-cni-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:30.667443    8928 pod_ready.go:40] duration metric: took 32.1151687s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:01:30.771031    8928 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:01:30.775475    8928 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-218000" cluster and "default" namespace by default
	W1205 08:01:29.766484    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:01:28.627259   10268 node_ready.go:49] node "flannel-218000" is "Ready"
	I1205 08:01:28.627361   10268 node_ready.go:38] duration metric: took 11.0070445s for node "flannel-218000" to be "Ready" ...
	I1205 08:01:28.627411   10268 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:01:28.633598   10268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:01:28.653249   10268 api_server.go:72] duration metric: took 12.3889953s to wait for apiserver process to appear ...
	I1205 08:01:28.653249   10268 api_server.go:88] waiting for apiserver healthz status ...
	I1205 08:01:28.653249   10268 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62113/healthz ...
	I1205 08:01:28.661270   10268 api_server.go:279] https://127.0.0.1:62113/healthz returned 200:
	ok
	I1205 08:01:28.664247   10268 api_server.go:141] control plane version: v1.34.2
	I1205 08:01:28.664247   10268 api_server.go:131] duration metric: took 10.9978ms to wait for apiserver health ...
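The healthz probe goes through the Docker-forwarded port, and the 200/"ok" pair above is the entire response. The plain-curl equivalent of the logged check; -k is needed because the apiserver's certificate chains to minikube's own CA, which the host does not trust:

    curl -sk https://127.0.0.1:62113/healthz   # expect: ok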
	I1205 08:01:28.664247   10268 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 08:01:28.671264   10268 system_pods.go:59] 7 kube-system pods found
	I1205 08:01:28.671264   10268 system_pods.go:61] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:28.671264   10268 system_pods.go:61] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:28.671264   10268 system_pods.go:61] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:28.671264   10268 system_pods.go:61] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:28.671264   10268 system_pods.go:61] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:28.671264   10268 system_pods.go:61] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:28.671264   10268 system_pods.go:61] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:28.671264   10268 system_pods.go:74] duration metric: took 7.0163ms to wait for pod list to return data ...
	I1205 08:01:28.671264   10268 default_sa.go:34] waiting for default service account to be created ...
	I1205 08:01:28.677250   10268 default_sa.go:45] found service account: "default"
	I1205 08:01:28.677250   10268 default_sa.go:55] duration metric: took 5.9856ms for default service account to be created ...
	I1205 08:01:28.677250   10268 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 08:01:28.681253   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:28.682258   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:28.682258   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:28.682258   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:28.682258   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:28.682258   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:28.682258   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:28.682258   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:28.682258   10268 retry.go:31] will retry after 188.889868ms: missing components: kube-dns
	I1205 08:01:28.878841   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:28.878841   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:28.878841   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:28.878841   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:28.878841   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:28.878841   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:28.878841   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:28.878841   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:28.878841   10268 retry.go:31] will retry after 317.411488ms: missing components: kube-dns
	I1205 08:01:29.203208   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:29.203208   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:29.203208   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:29.203208   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:29.203208   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:29.203208   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:29.203208   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:29.203208   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:29.204275   10268 retry.go:31] will retry after 385.857011ms: missing components: kube-dns
	I1205 08:01:29.598073   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:29.598642   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:29.598642   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:29.598642   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:29.598642   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:29.598674   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:29.598674   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:29.598674   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:29.598703   10268 retry.go:31] will retry after 420.323086ms: missing components: kube-dns
	I1205 08:01:30.027559   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:30.027559   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:30.027559   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:30.027559   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:30.027559   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:30.027559   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:30.027559   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:30.027559   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:30.028616   10268 retry.go:31] will retry after 529.04919ms: missing components: kube-dns
	I1205 08:01:30.565556   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:30.565627   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:30.565627   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:30.565627   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:30.565627   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:30.565627   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:30.565627   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:30.565627   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:30.565759   10268 retry.go:31] will retry after 688.323572ms: missing components: kube-dns
	I1205 08:01:31.263476   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:31.263555   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:31.263592   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:31.263592   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:31.263592   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:31.263592   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:31.263654   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:31.263654   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Running
	I1205 08:01:31.263654   10268 retry.go:31] will retry after 1.106902837s: missing components: kube-dns
	I1205 08:01:32.378211   10268 system_pods.go:86] 7 kube-system pods found
	I1205 08:01:32.378348   10268 system_pods.go:89] "coredns-66bc5c9577-wq85f" [f0ccca76-8fb6-4d8b-afd7-ade2cee57958] Running
	I1205 08:01:32.378348   10268 system_pods.go:89] "etcd-flannel-218000" [a188c8c3-9065-4001-bfd8-bc3d8f2926f0] Running
	I1205 08:01:32.378348   10268 system_pods.go:89] "kube-apiserver-flannel-218000" [95e3c06c-d6dc-486b-b5ac-fb575bb1e782] Running
	I1205 08:01:32.378348   10268 system_pods.go:89] "kube-controller-manager-flannel-218000" [254b19cd-6334-4b3f-a9d9-2a36f3179c4b] Running
	I1205 08:01:32.378348   10268 system_pods.go:89] "kube-proxy-qf54r" [93bd78ff-2fbb-4e9d-999c-faf5a35f6212] Running
	I1205 08:01:32.378348   10268 system_pods.go:89] "kube-scheduler-flannel-218000" [75d8e045-c85c-4830-a147-b089ae1ccad5] Running
	I1205 08:01:32.378348   10268 system_pods.go:89] "storage-provisioner" [d59efc00-1c80-433c-914e-7e41dbb50ce4] Running
	I1205 08:01:32.378453   10268 system_pods.go:126] duration metric: took 3.7011447s to wait for k8s-apps to be running ...
	I1205 08:01:32.378453   10268 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 08:01:32.382985   10268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 08:01:32.402011   10268 system_svc.go:56] duration metric: took 23.5573ms WaitForService to wait for kubelet
	I1205 08:01:32.402074   10268 kubeadm.go:587] duration metric: took 16.1377603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:01:32.402201   10268 node_conditions.go:102] verifying NodePressure condition ...
	I1205 08:01:32.409888   10268 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1205 08:01:32.409888   10268 node_conditions.go:123] node cpu capacity is 16
	I1205 08:01:32.409888   10268 node_conditions.go:105] duration metric: took 7.6289ms to run NodePressure ...
	I1205 08:01:32.409888   10268 start.go:242] waiting for startup goroutines ...
	I1205 08:01:32.409888   10268 start.go:247] waiting for cluster config update ...
	I1205 08:01:32.409888   10268 start.go:256] writing updated cluster config ...
	I1205 08:01:32.415364   10268 ssh_runner.go:195] Run: rm -f paused
	I1205 08:01:32.423230   10268 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:01:32.431082   10268 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wq85f" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:32.441505   10268 pod_ready.go:94] pod "coredns-66bc5c9577-wq85f" is "Ready"
	I1205 08:01:32.441505   10268 pod_ready.go:86] duration metric: took 10.3625ms for pod "coredns-66bc5c9577-wq85f" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:32.445194   10268 pod_ready.go:83] waiting for pod "etcd-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:32.452878   10268 pod_ready.go:94] pod "etcd-flannel-218000" is "Ready"
	I1205 08:01:32.452878   10268 pod_ready.go:86] duration metric: took 7.6838ms for pod "etcd-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:32.456325   10268 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:32.463305   10268 pod_ready.go:94] pod "kube-apiserver-flannel-218000" is "Ready"
	I1205 08:01:32.463305   10268 pod_ready.go:86] duration metric: took 6.98ms for pod "kube-apiserver-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:32.467228   10268 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:32.830852   10268 pod_ready.go:94] pod "kube-controller-manager-flannel-218000" is "Ready"
	I1205 08:01:32.830852   10268 pod_ready.go:86] duration metric: took 363.6183ms for pod "kube-controller-manager-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:33.033379   10268 pod_ready.go:83] waiting for pod "kube-proxy-qf54r" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:33.430637   10268 pod_ready.go:94] pod "kube-proxy-qf54r" is "Ready"
	I1205 08:01:33.431198   10268 pod_ready.go:86] duration metric: took 397.812ms for pod "kube-proxy-qf54r" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:33.635153   10268 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:34.032456   10268 pod_ready.go:94] pod "kube-scheduler-flannel-218000" is "Ready"
	I1205 08:01:34.032456   10268 pod_ready.go:86] duration metric: took 397.2974ms for pod "kube-scheduler-flannel-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:01:34.032456   10268 pod_ready.go:40] duration metric: took 1.609201s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:01:34.155486   10268 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:01:34.158464   10268 out.go:179] * Done! kubectl is now configured to use "flannel-218000" cluster and "default" namespace by default
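"Done!" means minikube has merged the profile into the kubeconfig and made it the current context. A quick sanity check from the same shell (a hypothetical follow-up, not part of this run):

    kubectl config current-context   # expect: flannel-218000
    kubectl get nodes                # expect: flannel-218000   Ready   control-plane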
	W1205 08:01:39.802268    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:01:45.188319    1056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 08:01:45.188319    1056 kubeadm.go:319] 
	I1205 08:01:45.188319    1056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
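This stanza belongs to a third profile (process 1056, Kubernetes v1.35.0-beta.0): kubeadm's wait-control-plane phase gave up because the kubelet never answered its local healthz endpoint. The usual first diagnostics on the node are the standard systemd and kubelet commands below, not anything taken from this log:

    curl -s http://127.0.0.1:10248/healthz    # the endpoint kubeadm was polling
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 50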
	I1205 08:01:45.191322    1056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 08:01:45.192319    1056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 08:01:45.192319    1056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 08:01:45.192319    1056 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 08:01:45.192319    1056 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_INET: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 08:01:45.193324    1056 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 08:01:45.194315    1056 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] OS: Linux
	I1205 08:01:45.195329    1056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 08:01:45.195329    1056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 08:01:45.196315    1056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 08:01:45.197312    1056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 08:01:45.197312    1056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 08:01:45.197312    1056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 08:01:45.200324    1056 out.go:252]   - Generating certificates and keys ...
	I1205 08:01:45.200324    1056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 08:01:45.200324    1056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 08:01:45.201316    1056 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 08:01:45.202312    1056 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 08:01:45.202312    1056 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 08:01:45.202312    1056 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 08:01:45.202312    1056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 08:01:45.203321    1056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 08:01:45.203321    1056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 08:01:45.203321    1056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 08:01:45.203321    1056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 08:01:45.207317    1056 out.go:252]   - Booting up control plane ...
	I1205 08:01:45.207317    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 08:01:45.207317    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 08:01:45.208321    1056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 08:01:45.209322    1056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 08:01:45.209322    1056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000992736s
	I1205 08:01:45.209322    1056 kubeadm.go:319] 
	I1205 08:01:45.209322    1056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- The kubelet is not running
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 08:01:45.210317    1056 kubeadm.go:319] 
	I1205 08:01:45.210317    1056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 08:01:45.210317    1056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 08:01:45.210317    1056 kubeadm.go:319] 
	I1205 08:01:45.210317    1056 kubeadm.go:403] duration metric: took 8m4.5341682s to StartCluster
	I1205 08:01:45.210317    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 08:01:45.214317    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 08:01:45.280016    1056 cri.go:89] found id: ""
	I1205 08:01:45.280016    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.280016    1056 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:01:45.280016    1056 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 08:01:45.284017    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 08:01:45.326531    1056 cri.go:89] found id: ""
	I1205 08:01:45.326531    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.326531    1056 logs.go:284] No container was found matching "etcd"
	I1205 08:01:45.326531    1056 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 08:01:45.332138    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 08:01:45.377345    1056 cri.go:89] found id: ""
	I1205 08:01:45.377438    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.377438    1056 logs.go:284] No container was found matching "coredns"
	I1205 08:01:45.377562    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 08:01:45.382104    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 08:01:45.425147    1056 cri.go:89] found id: ""
	I1205 08:01:45.425147    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.425147    1056 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:01:45.425147    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 08:01:45.429455    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 08:01:45.478730    1056 cri.go:89] found id: ""
	I1205 08:01:45.478730    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.478730    1056 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:01:45.478730    1056 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 08:01:45.482728    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 08:01:45.533489    1056 cri.go:89] found id: ""
	I1205 08:01:45.533489    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.533489    1056 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:01:45.533489    1056 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 08:01:45.538462    1056 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 08:01:45.588632    1056 cri.go:89] found id: ""
	I1205 08:01:45.588632    1056 logs.go:282] 0 containers: []
	W1205 08:01:45.588632    1056 logs.go:284] No container was found matching "kindnet"
	I1205 08:01:45.588632    1056 logs.go:123] Gathering logs for kubelet ...
	I1205 08:01:45.588632    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:01:45.650205    1056 logs.go:123] Gathering logs for dmesg ...
	I1205 08:01:45.650205    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:01:45.690570    1056 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:01:45.690570    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:01:45.774146    1056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:01:45.763926   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.764746   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767033   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767902   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.770073   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:01:45.763926   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.764746   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767033   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.767902   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:45.770073   10870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:01:45.774146    1056 logs.go:123] Gathering logs for Docker ...
	I1205 08:01:45.774146    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:01:45.807714    1056 logs.go:123] Gathering logs for container status ...
	I1205 08:01:45.807714    1056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 08:01:45.862103    1056 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000992736s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 08:01:45.862191    1056 out.go:285] * 
	W1205 08:01:45.862283    1056 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical kubeadm init output (stdout and stderr) elided; see the first occurrence above ...]
	
	W1205 08:01:45.862283    1056 out.go:285] * 
	W1205 08:01:45.864148    1056 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:01:45.868140    1056 out.go:203] 
	W1205 08:01:45.871129    1056 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical kubeadm init output (stdout and stderr) elided; see the first occurrence above ...]
	
	W1205 08:01:45.872135    1056 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 08:01:45.872135    1056 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 08:01:45.875127    1056 out.go:203] 
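	Note: the SystemVerification warning and the kubelet journal further below agree on the root cause: the node is on cgroup v1, and kubelet v1.35+ refuses to start there unless the kubelet configuration option FailCgroupV1 is set to false. A hedged sketch of the workarounds named in the log (profile name from the log; the .wslconfig setting is a commonly documented WSL2 approach, not verified on this runner):
	
	    # Option 1, from the suggestion above: run the kubelet with the systemd cgroup driver
	    minikube start -p newest-cni-042100 --extra-config=kubelet.cgroup-driver=systemd
	
	    # Option 2, host side: move the WSL2 kernel to cgroup v2 so the check never fires.
	    # In %UserProfile%\.wslconfig on the Windows host:
	    #   [wsl2]
	    #   kernelCommandLine = cgroup_no_v1=all
	    # then restart WSL:
	    wsl --shutdown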
	
	
	==> Docker <==
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735621442Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735810362Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735822264Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735827264Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735874969Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.736046888Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.736251309Z" level=info msg="Initializing buildkit"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.916830207Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926605346Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926832270Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926915179Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926837171Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:53:09 newest-cni-042100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:53:10 newest-cni-042100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:53:10 newest-cni-042100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
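	Note: cri-dockerd logs "Setting cgroupDriver cgroupfs" above, consistent with the cgroup-driver suggestion earlier in this log. A quick way to confirm the driver and cgroup version from inside the node (sketch; profile name from the log):
	
	    minikube ssh -p newest-cni-042100 -- "docker info --format '{{.CgroupDriver}} / {{.CgroupVersion}}'"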
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:01:48.350317   11040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:48.351348   11040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:48.353808   11040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:48.355088   11040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:01:48.356179   11040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.382967] tmpfs: Unknown parameter 'noswap'
	[  +0.719307] CPU: 0 PID: 443987 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fa2707b0b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fa2707b0af6.
	[  +0.000001] RSP: 002b:00007ffec3cf54c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000004] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.868195] CPU: 12 PID: 444159 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f7464557b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f7464557af6.
	[  +0.000001] RSP: 002b:00007ffd7950fb50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +9.949526] tmpfs: Unknown parameter 'noswap'
	[Dec 5 08:01] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 08:01:48 up  3:35,  0 user,  load average: 5.49, 5.16, 4.30
	Linux newest-cni-042100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:01:45 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:46 newest-cni-042100 kubelet[10897]: E1205 08:01:46.350431   10897 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:46 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:47 newest-cni-042100 kubelet[10921]: E1205 08:01:47.049556   10921 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:01:47 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:01:47 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:01:47 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 08:01:47 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:47 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:47 newest-cni-042100 kubelet[10972]: E1205 08:01:47.847613   10972 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:01:47 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:01:47 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:01:48 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 05 08:01:48 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:48 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:01:48 newest-cni-042100 kubelet[11051]: E1205 08:01:48.585753   11051 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:01:48 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:01:48 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100: exit status 6 (597.0374ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 08:01:49.910108   11768 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-042100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (537.34s)
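Note: the status output above also shows a secondary problem: the profile is missing from the kubeconfig ("newest-cni-042100" does not appear in it). The log names the remedy itself; a sketch using the profile name from the log:

    minikube update-context -p newest-cni-042100
    kubectl config get-contexts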

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (5.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-104100 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-104100 create -f testdata\busybox.yaml: exit status 1 (105.4829ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-104100" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-104100 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-104100
helpers_test.go:243: (dbg) docker inspect no-preload-104100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043",
	        "Created": "2025-12-05T07:47:18.090294673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:47:18.384905784Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hosts",
	        "LogPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043-json.log",
	        "Name": "/no-preload-104100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-104100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-104100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-104100",
	                "Source": "/var/lib/docker/volumes/no-preload-104100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-104100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-104100",
	                "name.minikube.sigs.k8s.io": "no-preload-104100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9cf4340ae5aa61b1664fdb6401e79df00ee5d95456b58c783a5450634e707fb",
	            "SandboxKey": "/var/run/docker/netns/f9cf4340ae5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60497"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60499"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60500"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-104100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "707b5f83051fc4c181f3506b97f5ea358824531428895a55938badd3159b6c9f",
	                    "EndpointID": "17b4da3586c46e948162b9510e7b2371f3a3cf1ebbe0c711b2fa91578460e0c9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-104100",
	                        "5f2a793d7573"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 6 (624.7782ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:56:01.647751   10528 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
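Note: the status output above names its own remedy. The kubeconfig no longer carries an endpoint for this profile, so the stale context entry has to be rewritten with `minikube update-context`. A minimal sketch of that repair, using the binary path and profile name from this run (the follow-up `kubectl` check is an assumed verification step, not part of the test flow):

	out/minikube-windows-amd64.exe -p no-preload-104100 update-context
	# assumed check: confirm kubectl now targets the repaired context
	kubectl config current-context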
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25: (1.1409072s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-218000 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/kubernetes/kubelet.conf                                                           │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /var/lib/kubelet/config.yaml                                                           │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status docker --all --full --no-pager                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat docker --no-pager                                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/docker/daemon.json                                                                │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo docker system info                                                                         │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status cri-docker --all --full --no-pager                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat cri-docker --no-pager                                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cri-dockerd --version                                                                      │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status containerd --all --full --no-pager                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat containerd --no-pager                                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/containerd/config.toml                                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo containerd config dump                                                                     │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status crio --all --full --no-pager                                              │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	│ ssh     │ -p auto-218000 sudo systemctl cat crio --no-pager                                                              │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo crio config                                                                                │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ delete  │ -p auto-218000                                                                                                 │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ start   │ -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-863300                                                                                   │ kubernetes-upgrade-863300 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ start   │ -p calico-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker   │ calico-218000             │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:55:43
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:55:43.385785   11048 out.go:360] Setting OutFile to fd 1688 ...
	I1205 07:55:43.445538   11048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:43.445538   11048 out.go:374] Setting ErrFile to fd 840...
	I1205 07:55:43.445538   11048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:43.460218   11048 out.go:368] Setting JSON to false
	I1205 07:55:43.463643   11048 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12601,"bootTime":1764908742,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:55:43.463643   11048 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:55:43.467039   11048 out.go:179] * [calico-218000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:55:43.472324   11048 notify.go:221] Checking for updates...
	I1205 07:55:43.475120   11048 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:55:43.478124   11048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:55:43.480125   11048 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:55:43.483116   11048 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:55:43.485128   11048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:55:43.488117   11048 config.go:182] Loaded profile config "kindnet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:55:43.489119   11048 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:43.489119   11048 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:43.489119   11048 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:55:43.623399   11048 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:55:43.627393   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:43.878533   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:43.85365759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:43.883492   11048 out.go:179] * Using the docker driver based on user configuration
	I1205 07:55:41.623253    3768 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (14.452724s)
	I1205 07:55:41.623253    3768 kic.go:203] duration metric: took 14.4566514s to extract preloaded images to volume ...
	I1205 07:55:41.627859    3768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:41.863901    3768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:93 SystemTime:2025-12-05 07:55:41.838776023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:41.868259    3768 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:55:42.117545    3768 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-218000 --name kindnet-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-218000 --network kindnet-218000 --ip 192.168.94.2 --volume kindnet-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:55:43.388568    3768 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-218000 --name kindnet-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-218000 --network kindnet-218000 --ip 192.168.94.2 --volume kindnet-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b: (1.270844s)
	I1205 07:55:43.394720    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Running}}
	I1205 07:55:43.460218    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:43.516123    3768 cli_runner.go:164] Run: docker exec kindnet-218000 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:55:43.642405    3768 oci.go:144] the created container "kindnet-218000" has a running status.
	I1205 07:55:43.642405    3768 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa...
	I1205 07:55:43.880500    3768 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:55:43.953504    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:44.056802    3768 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:55:44.056802    3768 kic_runner.go:114] Args: [docker exec --privileged kindnet-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:55:43.885483   11048 start.go:309] selected driver: docker
	I1205 07:55:43.885483   11048 start.go:927] validating driver "docker" against <nil>
	I1205 07:55:43.885483   11048 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:55:43.929498   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:44.213788   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:44.194232325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:44.213788   11048 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:55:44.214786   11048 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:55:44.217784   11048 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 07:55:44.219783   11048 cni.go:84] Creating CNI manager for "calico"
	I1205 07:55:44.219783   11048 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 07:55:44.219783   11048 start.go:353] cluster config:
	{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:55:44.221785   11048 out.go:179] * Starting "calico-218000" primary control-plane node in "calico-218000" cluster
	I1205 07:55:44.225783   11048 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:55:44.227787   11048 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:55:44.231783   11048 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:55:44.231783   11048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:44.231783   11048 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1205 07:55:44.232783   11048 cache.go:65] Caching tarball of preloaded images
	I1205 07:55:44.232783   11048 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1205 07:55:44.232783   11048 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1205 07:55:44.232783   11048 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-218000\config.json ...
	I1205 07:55:44.232783   11048 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-218000\config.json: {Name:mk91c6afceb766415a42b808b03437547163f98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:55:44.319041   11048 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:55:44.319041   11048 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:55:44.319041   11048 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:55:44.319041   11048 start.go:360] acquireMachinesLock for calico-218000: {Name:mkaef444365c0a217df0cccc3ef485884ea3ee5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:55:44.319041   11048 start.go:364] duration metric: took 0s to acquireMachinesLock for "calico-218000"
	I1205 07:55:44.319041   11048 start.go:93] Provisioning new machine with config: &{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:55:44.319041   11048 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:55:44.322048   11048 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:55:44.323048   11048 start.go:159] libmachine.API.Create for "calico-218000" (driver="docker")
	I1205 07:55:44.323048   11048 client.go:173] LocalClient.Create starting
	I1205 07:55:44.323048   11048 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:44.331040   11048 cli_runner.go:164] Run: docker network inspect calico-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:55:44.384044   11048 cli_runner.go:211] docker network inspect calico-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:55:44.387041   11048 network_create.go:284] running [docker network inspect calico-218000] to gather additional debugging logs...
	I1205 07:55:44.387041   11048 cli_runner.go:164] Run: docker network inspect calico-218000
	W1205 07:55:44.437033   11048 cli_runner.go:211] docker network inspect calico-218000 returned with exit code 1
	I1205 07:55:44.437033   11048 network_create.go:287] error running [docker network inspect calico-218000]: docker network inspect calico-218000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-218000 not found
	I1205 07:55:44.437033   11048 network_create.go:289] output of [docker network inspect calico-218000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-218000 not found
	
	** /stderr **
	I1205 07:55:44.440033   11048 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:55:44.518567   11048 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.549565   11048 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.581054   11048 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.612112   11048 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.644105   11048 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.676183   11048 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.707994   11048 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.726521   11048 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001705a40}
	I1205 07:55:44.726521   11048 network_create.go:124] attempt to create docker network calico-218000 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1205 07:55:44.730522   11048 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-218000 calico-218000
	I1205 07:55:44.872256   11048 network_create.go:108] docker network calico-218000 192.168.112.0/24 created
	I1205 07:55:44.872296   11048 kic.go:121] calculated static IP "192.168.112.2" for the "calico-218000" container
	I1205 07:55:44.887833   11048 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:55:44.947118   11048 cli_runner.go:164] Run: docker volume create calico-218000 --label name.minikube.sigs.k8s.io=calico-218000 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:55:45.020325   11048 oci.go:103] Successfully created a docker volume calico-218000
	I1205 07:55:45.024743   11048 cli_runner.go:164] Run: docker run --rm --name calico-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --entrypoint /usr/bin/test -v calico-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:55:46.189558   11048 cli_runner.go:217] Completed: docker run --rm --name calico-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --entrypoint /usr/bin/test -v calico-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.164797s)
	I1205 07:55:46.189558   11048 oci.go:107] Successfully prepared a docker volume calico-218000
	I1205 07:55:46.189558   11048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:46.189558   11048 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:55:46.195557   11048 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 07:55:44.191788    3768 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa...
	I1205 07:55:46.472372    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:46.525606    3768 machine.go:94] provisionDockerMachine start ...
	I1205 07:55:46.531024    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:46.591825    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:46.606706    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:46.606706    3768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:55:46.882633    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-218000
	
	I1205 07:55:46.882633    3768 ubuntu.go:182] provisioning hostname "kindnet-218000"
	I1205 07:55:46.886539    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:46.942319    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:46.943089    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:46.943089    3768 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-218000 && echo "kindnet-218000" | sudo tee /etc/hostname
	I1205 07:55:47.144763    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-218000
	
	I1205 07:55:47.148216    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.200257    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:47.200548    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:47.200548    3768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:55:47.383155    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:55:47.383235    3768 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:55:47.383267    3768 ubuntu.go:190] setting up certificates
	I1205 07:55:47.383348    3768 provision.go:84] configureAuth start
	I1205 07:55:47.386186    3768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-218000
	I1205 07:55:47.434188    3768 provision.go:143] copyHostCerts
	I1205 07:55:47.434188    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:55:47.434188    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:55:47.434188    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:55:47.435186    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:55:47.435186    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:55:47.435186    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:55:47.436186    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:55:47.436186    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:55:47.436186    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:55:47.437185    3768 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-218000 san=[127.0.0.1 192.168.94.2 kindnet-218000 localhost minikube]
	I1205 07:55:47.506006    3768 provision.go:177] copyRemoteCerts
	I1205 07:55:47.510770    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:55:47.513952    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.565725    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:47.689901    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:55:47.721502    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:55:47.749769    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1205 07:55:47.778148    3768 provision.go:87] duration metric: took 394.7705ms to configureAuth
	I1205 07:55:47.778148    3768 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:55:47.778148    3768 config.go:182] Loaded profile config "kindnet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:55:47.781148    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.831153    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:47.832148    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:47.832148    3768 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 07:55:48.034092    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 07:55:48.034092    3768 ubuntu.go:71] root file system type: overlay
	I1205 07:55:48.034092    3768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 07:55:48.038282    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:48.099329    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:48.099941    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:48.100168    3768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 07:55:48.308272    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 07:55:48.311928    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:48.367848    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:48.367927    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:48.367927    3768 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
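The command above is an idempotent install: `diff -u` exits 0 when the staged docker.service.new matches the installed unit, so nothing happens; a non-zero exit (files differ) triggers the move plus daemon-reload/enable/restart. A minimal Go sketch of the same pattern, run locally rather than over SSH as minikube does (the code is illustrative, not minikube's):

	// Replace the unit and restart the daemon only when the staged file
	// actually differs from the installed one.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		const installed = "/lib/systemd/system/docker.service"
		const staged = installed + ".new"

		// diff -u exits 0 when the files are identical: nothing to do.
		if err := run("diff", "-u", installed, staged); err == nil {
			return
		}
		if err := os.Rename(staged, installed); err != nil {
			log.Fatal(err)
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"enable", "docker"},
			{"restart", "docker"},
		} {
			if err := run("systemctl", args...); err != nil {
				log.Fatal(err)
			}
		}
	}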
	I1205 07:55:56.232863    3504 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:55:56.233024    3504 kubeadm.go:319] 
	I1205 07:55:56.233374    3504 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:55:56.238199    3504 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:55:56.238199    3504 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:55:56.239229    3504 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:55:56.239951    3504 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:55:56.240038    3504 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:55:56.240149    3504 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:55:56.240900    3504 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:55:56.240989    3504 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:55:56.241160    3504 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:55:56.241262    3504 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:55:56.241353    3504 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:55:56.241527    3504 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:55:56.241709    3504 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:55:56.241841    3504 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:55:56.241965    3504 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:55:56.242178    3504 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:55:56.242300    3504 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:55:56.242449    3504 kubeadm.go:319] OS: Linux
	I1205 07:55:56.242570    3504 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:55:56.242721    3504 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:55:56.243457    3504 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:55:56.243517    3504 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:55:56.243675    3504 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:55:56.243773    3504 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:55:56.592452    3504 out.go:252]   - Generating certificates and keys ...
	I1205 07:55:56.593639    3504 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:55:56.593845    3504 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:55:56.594114    3504 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:55:56.594161    3504 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:55:56.594421    3504 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:55:56.594527    3504 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:55:56.594848    3504 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:55:56.594994    3504 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:55:56.595183    3504 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:55:56.595515    3504 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:55:56.595613    3504 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:55:56.595780    3504 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:55:56.595940    3504 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:55:56.596106    3504 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:55:56.596218    3504 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:55:56.596381    3504 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:55:56.596498    3504 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:55:56.596674    3504 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:55:56.596833    3504 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:55:56.652657    3504 out.go:252]   - Booting up control plane ...
	I1205 07:55:56.653102    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:55:56.653292    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:55:56.653474    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:55:56.653708    3504 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:55:56.653923    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:55:56.654155    3504 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:55:56.654392    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:55:56.654499    3504 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:55:56.654779    3504 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:55:56.655037    3504 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:55:56.655160    3504 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001446272s
	I1205 07:55:56.655263    3504 kubeadm.go:319] 
	I1205 07:55:56.655375    3504 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:55:56.655475    3504 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:55:56.655710    3504 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:55:56.655741    3504 kubeadm.go:319] 
	I1205 07:55:56.655926    3504 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:55:56.656132    3504 kubeadm.go:319] 
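The failed [kubelet-check] wait above is kubeadm polling the kubelet's local health endpoint until a deadline; the error message even quotes the equivalent curl. A hedged Go sketch of that probe (the 4m deadline matches kubeadm's message; the code is illustrative, not kubeadm's):

	// Poll http://127.0.0.1:10248/healthz until it answers 200 OK or the
	// deadline expires, mirroring the [kubelet-check] wait in the log.
	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
			resp, err := http.DefaultClient.Do(req)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet healthy")
					return
				}
			}
			select {
			case <-ctx.Done():
				// On failure this surfaces "context deadline exceeded", as in the log.
				fmt.Println("kubelet not healthy:", ctx.Err())
				return
			case <-time.After(time.Second):
			}
		}
	}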
	I1205 07:55:56.656232    3504 kubeadm.go:403] duration metric: took 8m5.2264324s to StartCluster
	I1205 07:55:56.656382    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:55:56.660935    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:55:56.720992    3504 cri.go:89] found id: ""
	I1205 07:55:56.720992    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.720992    3504 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:55:56.720992    3504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:55:56.726101    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:55:56.779606    3504 cri.go:89] found id: ""
	I1205 07:55:56.779629    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.779629    3504 logs.go:284] No container was found matching "etcd"
	I1205 07:55:56.779681    3504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:55:56.783808    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:55:56.856128    3504 cri.go:89] found id: ""
	I1205 07:55:56.856232    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.856232    3504 logs.go:284] No container was found matching "coredns"
	I1205 07:55:56.856262    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:55:56.860617    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:55:56.903334    3504 cri.go:89] found id: ""
	I1205 07:55:56.903419    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.903419    3504 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:55:56.903419    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:55:56.907807    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:55:56.970846    3504 cri.go:89] found id: ""
	I1205 07:55:56.970898    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.970898    3504 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:55:56.970898    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:55:56.975641    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:55:57.023174    3504 cri.go:89] found id: ""
	I1205 07:55:57.023174    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.023174    3504 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:55:57.023174    3504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:55:57.027175    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:55:57.077156    3504 cri.go:89] found id: ""
	I1205 07:55:57.077156    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.077156    3504 logs.go:284] No container was found matching "kindnet"
	I1205 07:55:57.077156    3504 logs.go:123] Gathering logs for dmesg ...
	I1205 07:55:57.077156    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:55:57.117328    3504 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:55:57.117328    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:55:57.220104    3504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:55:57.210538   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.211481   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.213010   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.214100   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.215335   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:55:57.210538   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.211481   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.213010   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.214100   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.215335   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:55:57.221075    3504 logs.go:123] Gathering logs for Docker ...
	I1205 07:55:57.221075    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:55:57.251103    3504 logs.go:123] Gathering logs for container status ...
	I1205 07:55:57.251103    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:55:57.303905    3504 logs.go:123] Gathering logs for kubelet ...
	I1205 07:55:57.303905    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:55:57.367440    3504 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.367440    3504 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.369216    3504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:55:57.540920    3504 out.go:203] 
	W1205 07:55:57.554724    3504 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:55:57.554966    3504 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:55:57.554966    3504 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:55:57.597149    3504 out.go:203] 
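For reference, the suggestion above corresponds to a start invocation along these lines (the profile name is a placeholder; whether this helps depends on the root cause, which the kubelet journal further down pins on cgroup v1):

	minikube start -p <profile> --driver=docker --extra-config=kubelet.cgroup-driver=systemd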
	I1205 07:55:57.892052   11048 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (11.6963092s)
	I1205 07:55:57.892052   11048 kic.go:203] duration metric: took 11.7023081s to extract preloaded images to volume ...
	I1205 07:55:57.897048   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:58.164942   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:58.141964925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
	I1205 07:55:58.164942   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:58.141964925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:58.167943   11048 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:55:58.420951   11048 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-218000 --name calico-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-218000 --network calico-218000 --ip 192.168.112.2 --volume calico-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
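Because every service port in the run command above is published as `127.0.0.1::` (a random loopback host port), later steps recover the actual mapping with the inspect template already seen throughout this log; for the SSH port of this node that would be:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-218000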
	I1205 07:55:58.027958    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 07:55:48.298452362 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1205 07:55:58.027958    3768 machine.go:97] duration metric: took 11.502169s to provisionDockerMachine
	I1205 07:55:58.027958    3768 client.go:176] duration metric: took 32.9839815s to LocalClient.Create
	I1205 07:55:58.027958    3768 start.go:167] duration metric: took 32.9839815s to libmachine.API.Create "kindnet-218000"
	I1205 07:55:58.027958    3768 start.go:293] postStartSetup for "kindnet-218000" (driver="docker")
	I1205 07:55:58.027958    3768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:55:58.034943    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:55:58.037943    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.099940    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:58.241956    3768 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:55:58.252942    3768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:55:58.252942    3768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:55:58.252942    3768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 07:55:58.252942    3768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 07:55:58.253944    3768 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 07:55:58.260940    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:55:58.280946    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 07:55:58.311946    3768 start.go:296] duration metric: took 283.9835ms for postStartSetup
	I1205 07:55:58.317950    3768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-218000
	I1205 07:55:58.370949    3768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\config.json ...
	I1205 07:55:58.377943    3768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:55:58.380943    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.432944    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:58.555952    3768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:55:58.565960    3768 start.go:128] duration metric: took 33.5270487s to createHost
	I1205 07:55:58.565960    3768 start.go:83] releasing machines lock for "kindnet-218000", held for 33.5270487s
	I1205 07:55:58.570942    3768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-218000
	I1205 07:55:58.629946    3768 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 07:55:58.633971    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.633971    3768 ssh_runner.go:195] Run: cat /version.json
	I1205 07:55:58.636948    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.682952    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:58.683947    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	W1205 07:55:58.820084    3768 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 07:55:58.825734    3768 ssh_runner.go:195] Run: systemctl --version
	I1205 07:55:58.841879    3768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:55:58.857807    3768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:55:58.862473    3768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:55:58.915828    3768 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:55:58.915828    3768 start.go:496] detecting cgroup driver to use...
	I1205 07:55:58.915828    3768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:55:58.915828    3768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1205 07:55:58.931822    3768 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 07:55:58.931822    3768 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 07:55:59.070307    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:55:59.096637    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	
	
	==> Docker <==
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204268162Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204356772Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204649702Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204658903Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204665404Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204692206Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204726910Z" level=info msg="Initializing buildkit"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.370721193Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379527304Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379697822Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379729725Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379786131Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:47:28 no-preload-104100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:56:02.692718   11175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:02.693959   11175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:02.694508   11175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:02.697029   11175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:02.698753   11175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
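The refused connections above are the proximate symptom: nothing is listening on the apiserver port, which matches the empty container-status table earlier in this dump. One minimal way to confirm from the host (a sketch, reusing the profile name from this log):

	out/minikube-windows-amd64.exe -p no-preload-104100 ssh -- sudo docker ps --filter name=kube-apiserver

An empty listing means the control plane never came up; the kubelet section below shows why.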
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.352954] CPU: 0 PID: 402357 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f017a9e7b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f017a9e7af6.
	[  +0.000001] RSP: 002b:00007ffd8f7b8740 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000004] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.670434] CPU: 1 PID: 402610 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1dbc555b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1dbc555af6.
	[  +0.000001] RSP: 002b:00007fff5c4209e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:56:02 up  3:29,  0 user,  load average: 2.86, 3.84, 3.66
	Linux no-preload-104100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:55:59 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:00 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 05 07:56:00 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:00 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:00 no-preload-104100 kubelet[11000]: E1205 07:56:00.140253   11000 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:00 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:00 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:00 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 05 07:56:00 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:00 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:00 no-preload-104100 kubelet[11025]: E1205 07:56:00.898362   11025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:00 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:00 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:01 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 05 07:56:01 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:01 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:01 no-preload-104100 kubelet[11049]: E1205 07:56:01.622532   11049 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:01 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:01 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:02 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 327.
	Dec 05 07:56:02 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:02 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:02 no-preload-104100 kubelet[11086]: E1205 07:56:02.384084   11086 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:02 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:02 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
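The kubelet section quoted above is one failure repeating: the v1.35.0-beta.0 kubelet validates the host cgroup hierarchy at startup and exits immediately on cgroup v1, so systemd restarts it in a tight loop (counters 324 through 327 in this window alone). A quick way to check which hierarchy the node actually sees (a sketch with the same profile; `cgroup2fs` indicates cgroup v2, `tmpfs` indicates the legacy v1 layout):

	out/minikube-windows-amd64.exe -p no-preload-104100 ssh -- stat -fc %T /sys/fs/cgroup

On this WSL2 host the validation failure implies the v1 layout; moving to cgroup v2 is a WSL/kernel configuration change, not something the test harness can retry around.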
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 6 (604.6555ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:56:03.584023    1800 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
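The exit status 6 above is a second, independent symptom: the profile's endpoint is missing from the kubeconfig, so even a healthy cluster would be unreachable through kubectl. The fix the warning points at is a one-liner (a sketch with the same profile name):

	out/minikube-windows-amd64.exe -p no-preload-104100 update-context

Note this only repairs the kubeconfig entry; it does nothing for the kubelet crash loop that is keeping the apiserver down.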
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-104100
helpers_test.go:243: (dbg) docker inspect no-preload-104100:

-- stdout --
	[
	    {
	        "Id": "5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043",
	        "Created": "2025-12-05T07:47:18.090294673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:47:18.384905784Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hosts",
	        "LogPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043-json.log",
	        "Name": "/no-preload-104100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-104100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-104100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-104100",
	                "Source": "/var/lib/docker/volumes/no-preload-104100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-104100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-104100",
	                "name.minikube.sigs.k8s.io": "no-preload-104100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9cf4340ae5aa61b1664fdb6401e79df00ee5d95456b58c783a5450634e707fb",
	            "SandboxKey": "/var/run/docker/netns/f9cf4340ae5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60497"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60499"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60500"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-104100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "707b5f83051fc4c181f3506b97f5ea358824531428895a55938badd3159b6c9f",
	                    "EndpointID": "17b4da3586c46e948162b9510e7b2371f3a3cf1ebbe0c711b2fa91578460e0c9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-104100",
	                        "5f2a793d7573"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
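Two details in the inspect output are worth pulling out: the container itself is up ("Status": "running", which the Host check below confirms), and apiserver port 8443 is published only on loopback, here as 127.0.0.1:60500. The live mapping can be read back with the standard docker CLI (assuming the same container name):

	docker port no-preload-104100 8443

That host endpoint is what a repaired kubeconfig entry would need to point at.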
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 6 (609.8986ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:56:04.248213    5352 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25: (1.136404s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-218000 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/kubernetes/kubelet.conf                                                           │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /var/lib/kubelet/config.yaml                                                           │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status docker --all --full --no-pager                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat docker --no-pager                                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/docker/daemon.json                                                                │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo docker system info                                                                         │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status cri-docker --all --full --no-pager                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat cri-docker --no-pager                                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cri-dockerd --version                                                                      │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status containerd --all --full --no-pager                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl cat containerd --no-pager                                                        │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo cat /etc/containerd/config.toml                                                            │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo containerd config dump                                                                     │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo systemctl status crio --all --full --no-pager                                              │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	│ ssh     │ -p auto-218000 sudo systemctl cat crio --no-pager                                                              │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ ssh     │ -p auto-218000 sudo crio config                                                                                │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ delete  │ -p auto-218000                                                                                                 │ auto-218000               │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ start   │ -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-863300                                                                                   │ kubernetes-upgrade-863300 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │ 05 Dec 25 07:55 UTC │
	│ start   │ -p calico-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker   │ calico-218000             │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:55:43
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:55:43.385785   11048 out.go:360] Setting OutFile to fd 1688 ...
	I1205 07:55:43.445538   11048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:43.445538   11048 out.go:374] Setting ErrFile to fd 840...
	I1205 07:55:43.445538   11048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:55:43.460218   11048 out.go:368] Setting JSON to false
	I1205 07:55:43.463643   11048 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12601,"bootTime":1764908742,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:55:43.463643   11048 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:55:43.467039   11048 out.go:179] * [calico-218000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:55:43.472324   11048 notify.go:221] Checking for updates...
	I1205 07:55:43.475120   11048 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:55:43.478124   11048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:55:43.480125   11048 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:55:43.483116   11048 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:55:43.485128   11048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:55:43.488117   11048 config.go:182] Loaded profile config "kindnet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:55:43.489119   11048 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:43.489119   11048 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:55:43.489119   11048 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:55:43.623399   11048 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:55:43.627393   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:43.878533   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:43.85365759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:43.883492   11048 out.go:179] * Using the docker driver based on user configuration
	I1205 07:55:41.623253    3768 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (14.452724s)
	I1205 07:55:41.623253    3768 kic.go:203] duration metric: took 14.4566514s to extract preloaded images to volume ...
	I1205 07:55:41.627859    3768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:41.863901    3768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:93 SystemTime:2025-12-05 07:55:41.838776023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:41.868259    3768 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:55:42.117545    3768 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-218000 --name kindnet-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-218000 --network kindnet-218000 --ip 192.168.94.2 --volume kindnet-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:55:43.388568    3768 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-218000 --name kindnet-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-218000 --network kindnet-218000 --ip 192.168.94.2 --volume kindnet-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b: (1.270844s)
	I1205 07:55:43.394720    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Running}}
	I1205 07:55:43.460218    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:43.516123    3768 cli_runner.go:164] Run: docker exec kindnet-218000 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:55:43.642405    3768 oci.go:144] the created container "kindnet-218000" has a running status.
	I1205 07:55:43.642405    3768 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa...
	I1205 07:55:43.880500    3768 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:55:43.953504    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:44.056802    3768 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:55:44.056802    3768 kic_runner.go:114] Args: [docker exec --privileged kindnet-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:55:43.885483   11048 start.go:309] selected driver: docker
	I1205 07:55:43.885483   11048 start.go:927] validating driver "docker" against <nil>
	I1205 07:55:43.885483   11048 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:55:43.929498   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:44.213788   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:44.194232325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:44.213788   11048 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:55:44.214786   11048 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:55:44.217784   11048 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 07:55:44.219783   11048 cni.go:84] Creating CNI manager for "calico"
	I1205 07:55:44.219783   11048 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 07:55:44.219783   11048 start.go:353] cluster config:
	{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:55:44.221785   11048 out.go:179] * Starting "calico-218000" primary control-plane node in "calico-218000" cluster
	I1205 07:55:44.225783   11048 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:55:44.227787   11048 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:55:44.231783   11048 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:55:44.231783   11048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:44.231783   11048 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1205 07:55:44.232783   11048 cache.go:65] Caching tarball of preloaded images
	I1205 07:55:44.232783   11048 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1205 07:55:44.232783   11048 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1205 07:55:44.232783   11048 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-218000\config.json ...
	I1205 07:55:44.232783   11048 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-218000\config.json: {Name:mk91c6afceb766415a42b808b03437547163f98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:55:44.319041   11048 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:55:44.319041   11048 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:55:44.319041   11048 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:55:44.319041   11048 start.go:360] acquireMachinesLock for calico-218000: {Name:mkaef444365c0a217df0cccc3ef485884ea3ee5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:55:44.319041   11048 start.go:364] duration metric: took 0s to acquireMachinesLock for "calico-218000"
	I1205 07:55:44.319041   11048 start.go:93] Provisioning new machine with config: &{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:55:44.319041   11048 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:55:44.322048   11048 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:55:44.323048   11048 start.go:159] libmachine.API.Create for "calico-218000" (driver="docker")
	I1205 07:55:44.323048   11048 client.go:173] LocalClient.Create starting
	I1205 07:55:44.323048   11048 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Decoding PEM data...
	I1205 07:55:44.324047   11048 main.go:143] libmachine: Parsing certificate...
	I1205 07:55:44.331040   11048 cli_runner.go:164] Run: docker network inspect calico-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:55:44.384044   11048 cli_runner.go:211] docker network inspect calico-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:55:44.387041   11048 network_create.go:284] running [docker network inspect calico-218000] to gather additional debugging logs...
	I1205 07:55:44.387041   11048 cli_runner.go:164] Run: docker network inspect calico-218000
	W1205 07:55:44.437033   11048 cli_runner.go:211] docker network inspect calico-218000 returned with exit code 1
	I1205 07:55:44.437033   11048 network_create.go:287] error running [docker network inspect calico-218000]: docker network inspect calico-218000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-218000 not found
	I1205 07:55:44.437033   11048 network_create.go:289] output of [docker network inspect calico-218000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-218000 not found
	
	** /stderr **
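The "network calico-218000 not found" error above is the expected first-run branch: minikube inspects before creating, and the non-zero exit routes it into the subnet scan that follows. A quick hedged check with the plain docker CLI (the label value is the one minikube attaches in the create command further down) shows which networks minikube owns on a host:

    # list docker networks carrying minikube's ownership label
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true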
	I1205 07:55:44.440033   11048 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:55:44.518567   11048 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.549565   11048 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.581054   11048 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.612112   11048 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.644105   11048 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.676183   11048 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.707994   11048 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 07:55:44.726521   11048 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001705a40}
	I1205 07:55:44.726521   11048 network_create.go:124] attempt to create docker network calico-218000 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1205 07:55:44.730522   11048 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-218000 calico-218000
	I1205 07:55:44.872256   11048 network_create.go:108] docker network calico-218000 192.168.112.0/24 created
	I1205 07:55:44.872296   11048 kic.go:121] calculated static IP "192.168.112.2" for the "calico-218000" container
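The scan above walks candidate /24 subnets in steps of 9 (192.168.49.0, 58.0, 67.0, ... 112.0) until one is not reserved by an existing network, then creates the bridge and derives the node's static IP as gateway+1. A minimal sketch of the same creation, using only flags and values that appear in the log (run against a Docker host that does not already have a calico-218000 network):

    # recreate the bridge network by hand with the subnet minikube selected
    docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 \
      -o com.docker.network.driver.mtu=1500 calico-218000
    # confirm the subnet and gateway that were applied
    docker network inspect calico-218000 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'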
	I1205 07:55:44.887833   11048 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:55:44.947118   11048 cli_runner.go:164] Run: docker volume create calico-218000 --label name.minikube.sigs.k8s.io=calico-218000 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:55:45.020325   11048 oci.go:103] Successfully created a docker volume calico-218000
	I1205 07:55:45.024743   11048 cli_runner.go:164] Run: docker run --rm --name calico-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --entrypoint /usr/bin/test -v calico-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:55:46.189558   11048 cli_runner.go:217] Completed: docker run --rm --name calico-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --entrypoint /usr/bin/test -v calico-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.164797s)
	I1205 07:55:46.189558   11048 oci.go:107] Successfully prepared a docker volume calico-218000
	I1205 07:55:46.189558   11048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:55:46.189558   11048 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:55:46.195557   11048 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
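The two docker run calls above implement minikube's preload: a throwaway sidecar container first proves the freshly created volume is usable (its entrypoint is just /usr/bin/test -d /var/lib), then a tar container unpacks the lz4 image tarball into the volume that will later become the node's /var. A reduced sketch of the pattern, with hypothetical volume, tarball, and image names (the real ones are in the log lines above):

    docker volume create demo-var                         # hypothetical volume name
    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v demo-var:/extractDir \
      <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir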
	I1205 07:55:44.191788    3768 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa...
	I1205 07:55:46.472372    3768 cli_runner.go:164] Run: docker container inspect kindnet-218000 --format={{.State.Status}}
	I1205 07:55:46.525606    3768 machine.go:94] provisionDockerMachine start ...
	I1205 07:55:46.531024    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:46.591825    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:46.606706    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:46.606706    3768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:55:46.882633    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-218000
	
	I1205 07:55:46.882633    3768 ubuntu.go:182] provisioning hostname "kindnet-218000"
	I1205 07:55:46.886539    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:46.942319    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:46.943089    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:46.943089    3768 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-218000 && echo "kindnet-218000" | sudo tee /etc/hostname
	I1205 07:55:47.144763    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-218000
	
	I1205 07:55:47.148216    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.200257    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:47.200548    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:47.200548    3768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:55:47.383155    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:55:47.383235    3768 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:55:47.383267    3768 ubuntu.go:190] setting up certificates
	I1205 07:55:47.383348    3768 provision.go:84] configureAuth start
	I1205 07:55:47.386186    3768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-218000
	I1205 07:55:47.434188    3768 provision.go:143] copyHostCerts
	I1205 07:55:47.434188    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:55:47.434188    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:55:47.434188    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:55:47.435186    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:55:47.435186    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:55:47.435186    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:55:47.436186    3768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:55:47.436186    3768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:55:47.436186    3768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:55:47.437185    3768 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-218000 san=[127.0.0.1 192.168.94.2 kindnet-218000 localhost minikube]
	I1205 07:55:47.506006    3768 provision.go:177] copyRemoteCerts
	I1205 07:55:47.510770    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:55:47.513952    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.565725    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:47.689901    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:55:47.721502    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:55:47.749769    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1205 07:55:47.778148    3768 provision.go:87] duration metric: took 394.7705ms to configureAuth
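configureAuth above generated a server certificate whose SANs cover 127.0.0.1, the container IP, and the node name, then copied it to /etc/docker inside the node. A hedged sanity check with standard docker TLS flags (the cert/key filenames are placeholders for the files under .minikube, and the port is whichever host port Docker published for 2376):

    # confirm dockerd on the node answers TLS with the provisioned server cert
    docker --tlsverify \
      --tlscacert ca.pem --tlscert cert.pem --tlskey key.pem \
      -H tcp://127.0.0.1:<published-2376-port> version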
	I1205 07:55:47.778148    3768 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:55:47.778148    3768 config.go:182] Loaded profile config "kindnet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:55:47.781148    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:47.831153    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:47.832148    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:47.832148    3768 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 07:55:48.034092    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 07:55:48.034092    3768 ubuntu.go:71] root file system type: overlay
	I1205 07:55:48.034092    3768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 07:55:48.038282    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:48.099329    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:48.099941    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:48.100168    3768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 07:55:48.308272    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 07:55:48.311928    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:48.367848    3768 main.go:143] libmachine: Using SSH client type: native
	I1205 07:55:48.367927    3768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61226 <nil> <nil>}
	I1205 07:55:48.367927    3768 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
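The unit written above leans on a systemd rule its own comments spell out: for non-oneshot services a second ExecStart= is an error, so any override must first clear the inherited value with an empty ExecStart= before setting its own. A minimal sketch of that clear-then-set pattern in a conventional drop-in (hypothetical override path; minikube itself replaces /lib/systemd/system/docker.service wholesale, as the mv in the command above shows):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker
    # verify what systemd actually loaded after the change
    systemctl --no-pager cat docker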
	I1205 07:55:56.232863    3504 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:55:56.233024    3504 kubeadm.go:319] 
	I1205 07:55:56.233374    3504 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:55:56.238199    3504 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:55:56.238199    3504 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:55:56.238199    3504 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1205 07:55:56.239229    3504 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1205 07:55:56.239418    3504 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1205 07:55:56.239951    3504 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1205 07:55:56.240038    3504 kubeadm.go:319] CONFIG_INET: enabled
	I1205 07:55:56.240149    3504 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1205 07:55:56.240305    3504 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1205 07:55:56.240900    3504 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1205 07:55:56.240989    3504 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1205 07:55:56.241160    3504 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1205 07:55:56.241262    3504 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1205 07:55:56.241353    3504 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1205 07:55:56.241527    3504 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1205 07:55:56.241709    3504 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1205 07:55:56.241841    3504 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1205 07:55:56.241965    3504 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1205 07:55:56.242178    3504 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1205 07:55:56.242300    3504 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1205 07:55:56.242449    3504 kubeadm.go:319] OS: Linux
	I1205 07:55:56.242570    3504 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:55:56.242721    3504 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:55:56.242769    3504 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:55:56.243457    3504 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:55:56.243517    3504 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:55:56.243675    3504 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:55:56.243773    3504 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:55:56.243773    3504 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:55:56.592452    3504 out.go:252]   - Generating certificates and keys ...
	I1205 07:55:56.593639    3504 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:55:56.593845    3504 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:55:56.594114    3504 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:55:56.594161    3504 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:55:56.594421    3504 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:55:56.594527    3504 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:55:56.594848    3504 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:55:56.594994    3504 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:55:56.595183    3504 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:55:56.595515    3504 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:55:56.595613    3504 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:55:56.595780    3504 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:55:56.595940    3504 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:55:56.596106    3504 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:55:56.596218    3504 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:55:56.596381    3504 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:55:56.596498    3504 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:55:56.596674    3504 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:55:56.596833    3504 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:55:56.652657    3504 out.go:252]   - Booting up control plane ...
	I1205 07:55:56.653102    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:55:56.653292    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:55:56.653474    3504 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:55:56.653708    3504 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:55:56.653923    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:55:56.654155    3504 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:55:56.654392    3504 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:55:56.654499    3504 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:55:56.654779    3504 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:55:56.655037    3504 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:55:56.655160    3504 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001446272s
	I1205 07:55:56.655263    3504 kubeadm.go:319] 
	I1205 07:55:56.655375    3504 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:55:56.655475    3504 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:55:56.655710    3504 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:55:56.655741    3504 kubeadm.go:319] 
	I1205 07:55:56.655926    3504 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:55:56.656007    3504 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:55:56.656132    3504 kubeadm.go:319] 
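One practical note on the advice above: both commands have to run inside the node container, not on the Windows host. A hedged form via minikube ssh (the profile name is a placeholder; this excerpt does not record which profile process 3504 belongs to):

    minikube ssh -p <profile> -- sudo systemctl status kubelet --no-pager
    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 50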
	I1205 07:55:56.656232    3504 kubeadm.go:403] duration metric: took 8m5.2264324s to StartCluster
	I1205 07:55:56.656382    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:55:56.660935    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:55:56.720992    3504 cri.go:89] found id: ""
	I1205 07:55:56.720992    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.720992    3504 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:55:56.720992    3504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:55:56.726101    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:55:56.779606    3504 cri.go:89] found id: ""
	I1205 07:55:56.779629    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.779629    3504 logs.go:284] No container was found matching "etcd"
	I1205 07:55:56.779681    3504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:55:56.783808    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:55:56.856128    3504 cri.go:89] found id: ""
	I1205 07:55:56.856232    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.856232    3504 logs.go:284] No container was found matching "coredns"
	I1205 07:55:56.856262    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:55:56.860617    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:55:56.903334    3504 cri.go:89] found id: ""
	I1205 07:55:56.903419    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.903419    3504 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:55:56.903419    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:55:56.907807    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:55:56.970846    3504 cri.go:89] found id: ""
	I1205 07:55:56.970898    3504 logs.go:282] 0 containers: []
	W1205 07:55:56.970898    3504 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:55:56.970898    3504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:55:56.975641    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:55:57.023174    3504 cri.go:89] found id: ""
	I1205 07:55:57.023174    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.023174    3504 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:55:57.023174    3504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:55:57.027175    3504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:55:57.077156    3504 cri.go:89] found id: ""
	I1205 07:55:57.077156    3504 logs.go:282] 0 containers: []
	W1205 07:55:57.077156    3504 logs.go:284] No container was found matching "kindnet"
	I1205 07:55:57.077156    3504 logs.go:123] Gathering logs for dmesg ...
	I1205 07:55:57.077156    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:55:57.117328    3504 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:55:57.117328    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:55:57.220104    3504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:55:57.210538   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.211481   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.213010   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.214100   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.215335   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:55:57.210538   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.211481   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.213010   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.214100   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:55:57.215335   10807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
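Every "connection refused" above is the same symptom rather than a new fault: with the kubelet down, the kube-apiserver static pod was never created (the crictl probes earlier found zero containers), so nothing listens on 8443. A hedged re-check from the host (placeholder profile name):

    # should print nothing while the control plane is down, matching the probes above
    minikube ssh -p <profile> -- sudo crictl ps -a --name kube-apiserver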
	I1205 07:55:57.221075    3504 logs.go:123] Gathering logs for Docker ...
	I1205 07:55:57.221075    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 07:55:57.251103    3504 logs.go:123] Gathering logs for container status ...
	I1205 07:55:57.251103    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:55:57.303905    3504 logs.go:123] Gathering logs for kubelet ...
	I1205 07:55:57.303905    3504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:55:57.367440    3504 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001446272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
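The failing check is an ordinary HTTP probe, which makes it easy to reproduce interactively while the node container is still up. A hedged one-liner (placeholder profile name; the URL is taken verbatim from the error above):

    # the exact health probe kubeadm was polling, run by hand inside the node
    minikube ssh -p <profile> -- curl -sSL http://127.0.0.1:10248/healthz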
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.367440    3504 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: verbatim repeat of the kubeadm init output shown above]
	
	W1205 07:55:57.367440    3504 out.go:285] * 
	W1205 07:55:57.369216    3504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:55:57.540920    3504 out.go:203] 
	W1205 07:55:57.554724    3504 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: verbatim repeat of the kubeadm init output shown above]
	
	W1205 07:55:57.554966    3504 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:55:57.554966    3504 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
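Putting the two suggestion lines above into runnable form (the flag comes from the Suggestion line itself; the profile name is a placeholder). Note that the earlier SystemVerification warning also says a cgroup v1 host running kubelet v1.35 or newer additionally needs the kubelet option FailCgroupV1 set to false:

    # retry with the cgroup driver the suggestion names
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd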
	I1205 07:55:57.597149    3504 out.go:203] 
	I1205 07:55:57.892052   11048 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (11.6963092s)
	I1205 07:55:57.892052   11048 kic.go:203] duration metric: took 11.7023081s to extract preloaded images to volume ...
	I1205 07:55:57.897048   11048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:55:58.164942   11048 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:55:58.141964925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:55:58.167943   11048 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:55:58.420951   11048 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-218000 --name calico-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-218000 --network calico-218000 --ip 192.168.112.2 --volume calico-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:55:58.027958    3768 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 07:55:48.298452362 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
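The diff above shows the standard systemd drop-in pattern: an empty ExecStart= first clears the command inherited from the base unit, then a single replacement command is set; otherwise systemd refuses a Type=notify unit with "more than one ExecStart= setting". A hedged Go sketch that checks this invariant by scanning `systemctl cat docker.service` output (a simplification of systemd's real parsing):

    // execstart_check.go - count effective ExecStart= directives. An empty
    // `ExecStart=` line resets the list, a non-empty one appends; more than
    // one surviving entry is the condition systemd rejects for this unit type.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("systemctl", "cat", "docker.service").Output()
        if err != nil {
            panic(err)
        }
        count := 0
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "ExecStart=" {
                count = 0 // empty directive clears inherited commands
            } else if strings.HasPrefix(line, "ExecStart=") {
                count++
            }
        }
        fmt.Printf("effective ExecStart commands: %d\n", count)
        if count > 1 {
            fmt.Println("systemd would refuse to start this unit")
        }
    }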
	
	I1205 07:55:58.027958    3768 machine.go:97] duration metric: took 11.502169s to provisionDockerMachine
	I1205 07:55:58.027958    3768 client.go:176] duration metric: took 32.9839815s to LocalClient.Create
	I1205 07:55:58.027958    3768 start.go:167] duration metric: took 32.9839815s to libmachine.API.Create "kindnet-218000"
	I1205 07:55:58.027958    3768 start.go:293] postStartSetup for "kindnet-218000" (driver="docker")
	I1205 07:55:58.027958    3768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:55:58.034943    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:55:58.037943    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.099940    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:58.241956    3768 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:55:58.252942    3768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:55:58.252942    3768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:55:58.252942    3768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 07:55:58.252942    3768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 07:55:58.253944    3768 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 07:55:58.260940    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:55:58.280946    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 07:55:58.311946    3768 start.go:296] duration metric: took 283.9835ms for postStartSetup
	I1205 07:55:58.317950    3768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-218000
	I1205 07:55:58.370949    3768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\config.json ...
	I1205 07:55:58.377943    3768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:55:58.380943    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.432944    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:58.555952    3768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:55:58.565960    3768 start.go:128] duration metric: took 33.5270487s to createHost
	I1205 07:55:58.565960    3768 start.go:83] releasing machines lock for "kindnet-218000", held for 33.5270487s
	I1205 07:55:58.570942    3768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-218000
	I1205 07:55:58.629946    3768 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 07:55:58.633971    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.633971    3768 ssh_runner.go:195] Run: cat /version.json
	I1205 07:55:58.636948    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:55:58.682952    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	I1205 07:55:58.683947    3768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-218000\id_rsa Username:docker}
	W1205 07:55:58.820084    3768 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
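The failure above is the host-side binary name leaking into the Linux guest: `curl.exe` exists only on the Windows host, while the command runs over SSH inside the container, where the binary is plain `curl`. A hypothetical guard for illustration only (curlBinary and remoteOS are made-up names, not minikube's code):

    // curl_name.go - choose the curl binary name by the OS the command will
    // actually run on, not by the host's runtime.GOOS.
    package main

    import (
        "fmt"
        "runtime"
    )

    // curlBinary returns the curl executable name for the target OS.
    // Commands sent over SSH into the minikube container always target
    // "linux", regardless of the host platform.
    func curlBinary(remoteOS string) string {
        if remoteOS == "windows" {
            return "curl.exe"
        }
        return "curl"
    }

    func main() {
        fmt.Println("host:", runtime.GOOS, "-> local binary:", curlBinary(runtime.GOOS))
        fmt.Println("remote (container): linux -> binary:", curlBinary("linux"))
    }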
	I1205 07:55:58.825734    3768 ssh_runner.go:195] Run: systemctl --version
	I1205 07:55:58.841879    3768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:55:58.857807    3768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:55:58.862473    3768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:55:58.915828    3768 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:55:58.915828    3768 start.go:496] detecting cgroup driver to use...
	I1205 07:55:58.915828    3768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:55:58.915828    3768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1205 07:55:58.931822    3768 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 07:55:58.931822    3768 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
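The warning's proxy suggestion can be reproduced from Go: http.ProxyFromEnvironment routes requests through HTTP_PROXY/HTTPS_PROXY/NO_PROXY, and the 2-second budget below mirrors the `curl -m 2` probe in the log. A sketch only; minikube's own check shells out to curl rather than doing this in-process:

    // registry_probe.go - probe https://registry.k8s.io/ through whatever
    // proxy the environment defines, mirroring the connectivity check that
    // produced the warning above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second, // same budget as `curl -m 2`
            Transport: &http.Transport{
                Proxy: http.ProxyFromEnvironment, // honors HTTP(S)_PROXY / NO_PROXY
            },
        }
        resp, err := client.Get("https://registry.k8s.io/")
        if err != nil {
            fmt.Println("cannot reach registry:", err)
            return
        }
        resp.Body.Close()
        fmt.Println("registry reachable:", resp.Status)
    }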
	I1205 07:55:59.070307    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:55:59.096637    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:55:59.120193    3768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:55:59.131031    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:55:59.151002    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:55:59.170010    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:55:59.191009    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:55:59.210003    3768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:55:59.229007    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:55:59.248010    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:55:59.267005    3768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:55:59.285021    3768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:55:59.304022    3768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:55:59.323029    3768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:55:59.444027    3768 ssh_runner.go:195] Run: sudo systemctl restart containerd
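The sed pipeline above rewrites /etc/containerd/config.toml line by line to force the cgroupfs driver. A Go equivalent of the SystemdCgroup edit, kept deliberately regexp-based to match the sed (a real tool would parse the TOML instead):

    // systemd_cgroup_off.go - the Go analogue of
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // applied to /etc/containerd/config.toml. Needs the same privileges
    // as the sed; sketch only.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // (?m) makes ^ and $ match per line, like sed's line-oriented edit.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }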
	I1205 07:55:59.585419    3768 start.go:496] detecting cgroup driver to use...
	I1205 07:55:59.585419    3768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:55:59.591388    3768 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 07:55:59.617384    3768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:55:59.642387    3768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:55:59.759392    3768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:55:59.793108    3768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:55:59.811121    3768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:55:59.838113    3768 ssh_runner.go:195] Run: which cri-dockerd
	I1205 07:55:59.848106    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 07:55:59.863127    3768 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 07:55:59.891123    3768 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 07:56:00.096575    3768 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 07:56:00.257583    3768 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 07:56:00.257583    3768 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 07:56:00.287572    3768 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 07:56:00.311577    3768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:56:00.467582    3768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 07:56:01.489419    3768 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0218201s)
	I1205 07:56:01.494984    3768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:56:01.520638    3768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 07:56:01.546755    3768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:56:01.571757    3768 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 07:56:01.726161    3768 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 07:56:01.906914    3768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:56:02.092181    3768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 07:56:02.124175    3768 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 07:56:02.147174    3768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:56:02.324404    3768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 07:56:02.444437    3768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:56:02.465643    3768 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 07:56:02.469648    3768 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 07:56:02.477633    3768 start.go:564] Will wait 60s for crictl version
	I1205 07:56:02.481633    3768 ssh_runner.go:195] Run: which crictl
	I1205 07:56:02.492644    3768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:56:02.533648    3768 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 07:56:02.537654    3768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:56:02.580705    3768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:55:59.139004   11048 cli_runner.go:164] Run: docker container inspect calico-218000 --format={{.State.Running}}
	I1205 07:55:59.198000   11048 cli_runner.go:164] Run: docker container inspect calico-218000 --format={{.State.Status}}
	I1205 07:55:59.274003   11048 cli_runner.go:164] Run: docker exec calico-218000 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:55:59.400019   11048 oci.go:144] the created container "calico-218000" has a running status.
	I1205 07:55:59.400019   11048 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-218000\id_rsa...
	I1205 07:55:59.509772   11048 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-218000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:55:59.595399   11048 cli_runner.go:164] Run: docker container inspect calico-218000 --format={{.State.Status}}
	I1205 07:55:59.657391   11048 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:55:59.657391   11048 kic_runner.go:114] Args: [docker exec --privileged calico-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:55:59.791130   11048 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-218000\id_rsa...
	I1205 07:56:02.149170   11048 cli_runner.go:164] Run: docker container inspect calico-218000 --format={{.State.Status}}
	I1205 07:56:02.197189   11048 machine.go:94] provisionDockerMachine start ...
	I1205 07:56:02.200168   11048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-218000
	I1205 07:56:02.263387   11048 main.go:143] libmachine: Using SSH client type: native
	I1205 07:56:02.276826   11048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61254 <nil> <nil>}
	I1205 07:56:02.276826   11048 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:56:02.457652   11048 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-218000
	
	I1205 07:56:02.457652   11048 ubuntu.go:182] provisioning hostname "calico-218000"
	I1205 07:56:02.462645   11048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-218000
	I1205 07:56:02.519641   11048 main.go:143] libmachine: Using SSH client type: native
	I1205 07:56:02.520644   11048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61254 <nil> <nil>}
	I1205 07:56:02.520644   11048 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-218000 && echo "calico-218000" | sudo tee /etc/hostname
	I1205 07:56:02.713292   11048 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-218000
	
	I1205 07:56:02.716296   11048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-218000
	I1205 07:56:02.770294   11048 main.go:143] libmachine: Using SSH client type: native
	I1205 07:56:02.770294   11048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61254 <nil> <nil>}
	I1205 07:56:02.770294   11048 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:56:02.991069   11048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:56:02.991069   11048 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:56:02.991069   11048 ubuntu.go:190] setting up certificates
	I1205 07:56:02.991069   11048 provision.go:84] configureAuth start
	I1205 07:56:02.996072   11048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-218000
	I1205 07:56:03.069101   11048 provision.go:143] copyHostCerts
	I1205 07:56:03.069101   11048 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:56:03.069101   11048 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:56:03.069101   11048 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:56:03.070087   11048 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:56:03.071081   11048 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:56:03.071081   11048 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:56:03.073083   11048 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:56:03.073083   11048 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:56:03.073083   11048 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:56:03.074081   11048 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-218000 san=[127.0.0.1 192.168.112.2 calico-218000 localhost minikube]
	I1205 07:56:03.340079   11048 provision.go:177] copyRemoteCerts
	I1205 07:56:03.344083   11048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:56:03.348080   11048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-218000
	I1205 07:56:03.401304   11048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61254 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-218000\id_rsa Username:docker}
	I1205 07:56:02.626566    3768 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.0.4 ...
	I1205 07:56:02.629820    3768 cli_runner.go:164] Run: docker exec -t kindnet-218000 dig +short host.docker.internal
	I1205 07:56:02.755288    3768 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 07:56:02.759290    3768 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 07:56:02.767288    3768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:56:02.791707    3768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-218000
	I1205 07:56:02.850015    3768 kubeadm.go:884] updating cluster {Name:kindnet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:56:02.850543    3768 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:56:02.854337    3768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 07:56:02.905600    3768 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 07:56:02.905680    3768 docker.go:621] Images already preloaded, skipping extraction
	I1205 07:56:02.909343    3768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 07:56:02.953068    3768 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 07:56:02.953068    3768 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:56:02.953068    3768 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 docker true true} ...
	I1205 07:56:02.953068    3768 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-218000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1205 07:56:02.956067    3768 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 07:56:03.059081    3768 cni.go:84] Creating CNI manager for "kindnet"
	I1205 07:56:03.059081    3768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:56:03.059081    3768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-218000 NodeName:kindnet-218000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:56:03.060075    3768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-218000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
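The generated config pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12, two ranges that must not overlap. A small Go check of that invariant with net/netip (a sketch of the constraint, not kubeadm's validation code):

    // cidr_check.go - verify the pod and service CIDRs from the kubeadm
    // config above do not overlap.
    package main

    import (
        "fmt"
        "net/netip"
    )

    // overlaps reports whether two prefixes share any address: true iff
    // either prefix contains the other's base address.
    func overlaps(a, b netip.Prefix) bool {
        return a.Contains(b.Addr()) || b.Contains(a.Addr())
    }

    func main() {
        pods := netip.MustParsePrefix("10.244.0.0/16")
        svcs := netip.MustParsePrefix("10.96.0.0/12")
        fmt.Println("pod/service CIDRs overlap:", overlaps(pods, svcs)) // false
    }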
	
	I1205 07:56:03.066093    3768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:56:03.081083    3768 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:56:03.086079    3768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:56:03.102082    3768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 07:56:03.126077    3768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:56:03.148070    3768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1205 07:56:03.171070    3768 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:56:03.178075    3768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:56:03.197073    3768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:56:03.336070    3768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:56:03.359100    3768 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000 for IP: 192.168.94.2
	I1205 07:56:03.359100    3768 certs.go:195] generating shared ca certs ...
	I1205 07:56:03.359100    3768 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:56:03.360074    3768 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 07:56:03.360074    3768 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 07:56:03.360074    3768 certs.go:257] generating profile certs ...
	I1205 07:56:03.361088    3768 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\client.key
	I1205 07:56:03.361088    3768 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\client.crt with IP's: []
	I1205 07:56:03.441784    3768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\client.crt ...
	I1205 07:56:03.441784    3768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\client.crt: {Name:mk9a104c71dbb9dbaf4762d7511239ddcea51472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:56:03.442833    3768 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\client.key ...
	I1205 07:56:03.442833    3768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\client.key: {Name:mk4ae878cae746d0808ab702cd7e8fe8571a6a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:56:03.443749    3768 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.key.d661777c
	I1205 07:56:03.444251    3768 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.crt.d661777c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1205 07:56:03.608859    3768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.crt.d661777c ...
	I1205 07:56:03.608859    3768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.crt.d661777c: {Name:mkc5c88f8f4c51399568bde4097dfab6304b83a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:56:03.609786    3768 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.key.d661777c ...
	I1205 07:56:03.609786    3768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.key.d661777c: {Name:mke0862e2dcf22a3fa2ffab5d05b7c20068145cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:56:03.610743    3768 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.crt.d661777c -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.crt
	I1205 07:56:03.623748    3768 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.key.d661777c -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.key
	I1205 07:56:03.624750    3768 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.key
	I1205 07:56:03.624750    3768 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.crt with IP's: []
	I1205 07:56:03.699753    3768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.crt ...
	I1205 07:56:03.699753    3768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.crt: {Name:mkc82f36c8a8c57ca1edd2e2d6bec47bba688596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:56:03.700745    3768 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.key ...
	I1205 07:56:03.700745    3768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.key: {Name:mk3a4d0d02e7b16a48a6f50ac95b02ae4e2f0381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:56:03.714745    3768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 07:56:03.714745    3768 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 07:56:03.714745    3768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 07:56:03.715754    3768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 07:56:03.715754    3768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 07:56:03.715754    3768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 07:56:03.715754    3768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 07:56:03.716756    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:56:03.745752    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:56:03.772759    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:56:03.801747    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:56:03.830756    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 07:56:03.861757    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 07:56:03.893750    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:56:03.922750    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-218000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:56:03.950753    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:56:03.979752    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 07:56:04.012752    3768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 07:56:04.041753    3768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:56:04.072475    3768 ssh_runner.go:195] Run: openssl version
	I1205 07:56:04.085723    3768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:56:04.101270    3768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	
	
	==> Docker <==
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204268162Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204356772Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204649702Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204658903Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204665404Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204692206Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204726910Z" level=info msg="Initializing buildkit"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.370721193Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379527304Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379697822Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379729725Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379786131Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:47:28 no-preload-104100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:56:05.277251   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:05.278429   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:05.279330   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:05.280837   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:56:05.282183   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
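"Connection refused" here is a different signal from the kubelet probe's "context deadline exceeded" earlier: refused means the port is closed because no apiserver process is listening, not that something answered too slowly. A tiny Go probe that tells the two apart (port taken from the log):

    // dial_probe.go - distinguish "nothing listening" (connection refused)
    // from "unreachable/slow" (timeout) on the apiserver port seen above.
    package main

    import (
        "errors"
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err == nil {
            conn.Close()
            fmt.Println("something is listening on :8443")
            return
        }
        var nerr net.Error
        if errors.As(err, &nerr) && nerr.Timeout() {
            fmt.Println("timeout: nothing answered within the deadline")
        } else {
            // ECONNREFUSED lands here: the port is closed, i.e. the
            // apiserver process is simply not running.
            fmt.Println("dial failed:", err)
        }
        os.Exit(1)
    }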
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.352954] CPU: 0 PID: 402357 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f017a9e7b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f017a9e7af6.
	[  +0.000001] RSP: 002b:00007ffd8f7b8740 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000004] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.670434] CPU: 1 PID: 402610 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1dbc555b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1dbc555af6.
	[  +0.000001] RSP: 002b:00007fff5c4209e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:56:05 up  3:29,  0 user,  load average: 3.19, 3.90, 3.67
	Linux no-preload-104100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:56:02 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:03 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 328.
	Dec 05 07:56:03 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:03 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:03 no-preload-104100 kubelet[11195]: E1205 07:56:03.137355   11195 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:03 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:03 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:03 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 329.
	Dec 05 07:56:03 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:03 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:03 no-preload-104100 kubelet[11224]: E1205 07:56:03.877097   11224 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:03 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:03 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:04 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 05 07:56:04 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:04 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:04 no-preload-104100 kubelet[11253]: E1205 07:56:04.619672   11253 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:04 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:04 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:56:05 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 05 07:56:05 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:05 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:56:05 no-preload-104100 kubelet[11372]: E1205 07:56:05.377113   11372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:56:05 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:56:05 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
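The kubelet restart loop in the excerpt above (restart counters 328-331) is the v1.35 FailCgroupV1 validation announced in the preflight warning: this WSL2 kernel still runs cgroup v1, and the kubelet now refuses to start there unless that option is explicitly set to false. One common way to detect the cgroup version is to look for the v2 unified-hierarchy marker file; a Go sketch:

    // cgroup_version.go - report whether the host mounts cgroup v2's
    // unified hierarchy. The cgroup.controllers file exists at the cgroup
    // root only under v2; its absence implies v1 (or hybrid), the setup
    // the kubelet above refuses to validate.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else if os.IsNotExist(err) {
            fmt.Println("cgroup v1 or hybrid: kubelet >= v1.35 fails validation here unless FailCgroupV1 is set to false")
        } else {
            fmt.Println("could not stat cgroup root:", err)
        }
    }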
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 6 (565.9615ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:56:06.204370    5948 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (5.38s)
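Note: the journal excerpt above is the root cause for this whole group of failures. The kubelet exits during configuration validation because the node kernel (WSL2, per the dmesg output later in this report) presents cgroup v1, which Kubernetes v1.35.0-beta.0 rejects, so systemd restart-loops it and the apiserver never comes up. A minimal sketch of the same check, assuming golang.org/x/sys is available; this is illustrative, not kubelet or minikube source:

```go
// Minimal sketch: distinguish cgroup v1 from v2 the way the kubelet's
// validation effectively does, by checking the filesystem type mounted
// at /sys/fs/cgroup. Hypothetical helper, not production code.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		log.Fatalf("statfs /sys/fs/cgroup: %v", err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy): accepted by kubelet v1.35+")
	} else {
		fmt.Println("cgroup v1: rejected; kubelet will crash-loop as in the journal above")
	}
}
```

A common workaround on WSL2 hosts is to boot the kernel with `cgroup_no_v1=all` (via the `kernelCommandLine` setting in `.wslconfig`) so Docker Desktop presents a cgroup v2 hierarchy; that is an assumption about this CI host, not something the log confirms.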

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (110.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-104100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1205 07:56:12.936104    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:29.845938    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:29.853952    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:29.866953    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:29.889956    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:29.932944    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:30.013958    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:30.176949    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:30.499272    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:31.141005    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:32.424130    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:34.986468    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:40.108843    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:56:50.350696    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-104100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m47.8733679s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_ssh_b46bfb026038ab1b5f2bcb21638a71b5028f6c9a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-104100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
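Note: all four manifest "validation" errors above are the same `connection refused` on localhost:8443. kubectl cannot download the OpenAPI schema because no apiserver is listening, so the addon manifests themselves are not at fault. A quick way to separate "apiserver down" from "manifest broken" is a plain TCP probe; a minimal sketch using only the Go standard library, with illustrative names:

```go
// Minimal sketch: check whether anything is listening on the apiserver
// port before blaming the manifests. A failed dial here reproduces the
// "connection refused" seen in the addon-enable stderr above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443; manifest errors would be real")
}
```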
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-104100 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-104100 describe deploy/metrics-server -n kube-system: exit status 1 (104.7377ms)

** stderr ** 
	error: context "no-preload-104100" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-104100 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
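Note: the `context "no-preload-104100" does not exist` error matches the earlier status warning; the profile's entry is missing from the kubeconfig the harness points at. To confirm what the file actually contains, a minimal sketch using k8s.io/client-go (the path mirrors the one in the log and is an assumption, not a verified location):

```go
// Minimal sketch: enumerate the contexts present in the harness kubeconfig
// to confirm the "does not exist" error. Illustrative, not harness code.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		// no-preload-104100 is expected to be absent from this list
		fmt.Println("context:", name)
	}
}
```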
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-104100
helpers_test.go:243: (dbg) docker inspect no-preload-104100:

-- stdout --
	[
	    {
	        "Id": "5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043",
	        "Created": "2025-12-05T07:47:18.090294673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:47:18.384905784Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hosts",
	        "LogPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043-json.log",
	        "Name": "/no-preload-104100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-104100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-104100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-104100",
	                "Source": "/var/lib/docker/volumes/no-preload-104100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-104100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-104100",
	                "name.minikube.sigs.k8s.io": "no-preload-104100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9cf4340ae5aa61b1664fdb6401e79df00ee5d95456b58c783a5450634e707fb",
	            "SandboxKey": "/var/run/docker/netns/f9cf4340ae5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60497"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60499"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60500"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-104100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "707b5f83051fc4c181f3506b97f5ea358824531428895a55938badd3159b6c9f",
	                    "EndpointID": "17b4da3586c46e948162b9510e7b2371f3a3cf1ebbe0c711b2fa91578460e0c9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-104100",
	                        "5f2a793d7573"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
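Note: the inspect output confirms the container itself is healthy. Every `PortBindings` entry was requested with `HostPort: "0"` (let Docker pick), and `NetworkSettings.Ports` shows the ephemeral host ports Docker assigned (60495-60500, with 8443/tcp on 127.0.0.1:60500). A minimal sketch that reads those resolved bindings back via the Docker Engine Go SDK, assuming github.com/docker/docker is available:

```go
// Minimal sketch: print the ephemeral host ports Docker assigned to the
// kic container, matching the NetworkSettings.Ports block above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	insp, err := cli.ContainerInspect(context.Background(), "no-preload-104100")
	if err != nil {
		log.Fatal(err)
	}
	for port, bindings := range insp.NetworkSettings.Ports {
		for _, b := range bindings {
			// e.g. 8443/tcp -> 127.0.0.1:60500
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}
```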
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 6 (586.9979ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:57:54.847179   11748 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25: (1.0820184s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │        PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-218000 sudo systemctl status cri-docker --all --full --no-pager                                                               │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo systemctl cat cri-docker --no-pager                                                                               │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                          │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                    │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo cri-dockerd --version                                                                                             │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo systemctl status containerd --all --full --no-pager                                                               │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo systemctl cat containerd --no-pager                                                                               │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo cat /lib/systemd/system/containerd.service                                                                        │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo cat /etc/containerd/config.toml                                                                                   │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo containerd config dump                                                                                            │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo systemctl status crio --all --full --no-pager                                                                     │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │                     │
	│ ssh     │ -p kindnet-218000 sudo systemctl cat crio --no-pager                                                                                     │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p kindnet-218000 sudo crio config                                                                                                       │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ delete  │ -p kindnet-218000                                                                                                                        │ kindnet-218000        │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p calico-218000 sudo cat /etc/nsswitch.conf                                                                                             │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p calico-218000 sudo cat /etc/hosts                                                                                                     │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p calico-218000 sudo cat /etc/resolv.conf                                                                                               │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p calico-218000 sudo crictl pods                                                                                                        │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │                     │
	│ ssh     │ -p calico-218000 sudo crictl ps --all                                                                                                    │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ start   │ -p custom-flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker │ custom-flannel-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │                     │
	│ ssh     │ -p calico-218000 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                             │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p calico-218000 sudo ip a s                                                                                                             │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p calico-218000 sudo ip r s                                                                                                             │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │ 05 Dec 25 07:57 UTC │
	│ ssh     │ -p calico-218000 sudo iptables-save                                                                                                      │ calico-218000         │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 07:57 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:57:50
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:57:50.797428   13044 out.go:360] Setting OutFile to fd 1628 ...
	I1205 07:57:50.848796   13044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:57:50.848796   13044 out.go:374] Setting ErrFile to fd 776...
	I1205 07:57:50.848796   13044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:57:50.862828   13044 out.go:368] Setting JSON to false
	I1205 07:57:50.865834   13044 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12728,"bootTime":1764908742,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:57:50.865834   13044 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:57:50.872839   13044 out.go:179] * [custom-flannel-218000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:57:50.876837   13044 notify.go:221] Checking for updates...
	I1205 07:57:50.879849   13044 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:57:50.881836   13044 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:57:50.885833   13044 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:57:50.889838   13044 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:57:50.894829   13044 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:57:50.897834   13044 config.go:182] Loaded profile config "calico-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:57:50.897834   13044 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:57:50.898833   13044 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:57:50.898833   13044 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:57:51.010831   13044 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:57:51.015131   13044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:57:51.259167   13044 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:57:51.241067508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:57:51.263158   13044 out.go:179] * Using the docker driver based on user configuration
	I1205 07:57:51.266161   13044 start.go:309] selected driver: docker
	I1205 07:57:51.266161   13044 start.go:927] validating driver "docker" against <nil>
	I1205 07:57:51.266161   13044 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:57:51.365283   13044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:57:51.599283   13044 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 07:57:51.582556472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:57:51.600292   13044 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:57:51.600292   13044 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:57:51.604279   13044 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 07:57:51.607284   13044 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1205 07:57:51.607284   13044 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1205 07:57:51.608281   13044 start.go:353] cluster config:
	{Name:custom-flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:57:51.610279   13044 out.go:179] * Starting "custom-flannel-218000" primary control-plane node in "custom-flannel-218000" cluster
	I1205 07:57:51.622301   13044 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:57:51.627280   13044 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:57:51.629297   13044 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:57:51.629297   13044 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 07:57:51.630288   13044 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1205 07:57:51.630288   13044 cache.go:65] Caching tarball of preloaded images
	I1205 07:57:51.630288   13044 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1205 07:57:51.630288   13044 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1205 07:57:51.630288   13044 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-218000\config.json ...
	I1205 07:57:51.630288   13044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-218000\config.json: {Name:mk920dfbab70e81fa036e4f23a379bfcba64d2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:57:51.725306   13044 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:57:51.725306   13044 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:57:51.725306   13044 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:57:51.725306   13044 start.go:360] acquireMachinesLock for custom-flannel-218000: {Name:mk63caf6bb4ecb3cf126aa7ecd24152f4774a914 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:57:51.725306   13044 start.go:364] duration metric: took 0s to acquireMachinesLock for "custom-flannel-218000"
	I1205 07:57:51.725306   13044 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:57:51.726310   13044 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> Docker <==
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204268162Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204356772Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204649702Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204658903Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204665404Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204692206Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.204726910Z" level=info msg="Initializing buildkit"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.370721193Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379527304Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379697822Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379729725Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:47:28 no-preload-104100 dockerd[1171]: time="2025-12-05T07:47:28.379786131Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:47:28 no-preload-104100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:47:29 no-preload-104100 cri-dockerd[1463]: time="2025-12-05T07:47:29Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:47:29 no-preload-104100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:57:55.826758   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:57:55.828227   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:57:55.829152   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:57:55.830288   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:57:55.831305   13577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 07:56] CPU: 0 PID: 404576 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fc39ad49b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fc39ad49af6.
	[  +0.000001] RSP: 002b:00007ffe23adf440 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.854289] CPU: 13 PID: 404762 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7ff3dded0b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7ff3dded0af6.
	[  +0.000002] RSP: 002b:00007ffd38175e60 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000003] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +2.639867] tmpfs: Unknown parameter 'noswap'
	[  +6.224720] tmpfs: Unknown parameter 'noswap'
	[  +2.786150] tmpfs: Unknown parameter 'noswap'
	[  +6.041475] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:57:55 up  3:31,  0 user,  load average: 4.08, 3.98, 3.73
	Linux no-preload-104100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:57:52 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:57:53 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 05 07:57:53 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:53 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:53 no-preload-104100 kubelet[13397]: E1205 07:57:53.364271   13397 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:57:53 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:57:53 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:57:54 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 05 07:57:54 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:54 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:54 no-preload-104100 kubelet[13423]: E1205 07:57:54.116238   13423 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:57:54 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:57:54 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:57:54 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 05 07:57:54 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:54 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:54 no-preload-104100 kubelet[13443]: E1205 07:57:54.817800   13443 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:57:54 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:57:54 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:57:55 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 478.
	Dec 05 07:57:55 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:55 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:57:55 no-preload-104100 kubelet[13512]: E1205 07:57:55.604510   13512 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:57:55 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:57:55 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
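The empty "container status" table and the repeated "connection refused" errors from "kubectl describe nodes" in the log dump above both follow from the apiserver never coming up on this node. As a triage sketch (not part of the test suite; the profile name no-preload-104100 is taken from the logs above), the usual sequence is to ask systemd why kubelet is cycling and then check whether the apiserver container was ever created:

	# Why does kubelet keep restarting? (the journal above shows the restart counter at 475+)
	minikube ssh -p no-preload-104100 "sudo systemctl status kubelet --no-pager"
	minikube ssh -p no-preload-104100 "sudo journalctl -u kubelet -n 20 --no-pager"
	# Was an apiserver container ever created by kubelet?
	minikube ssh -p no-preload-104100 "docker ps -a --filter name=kube-apiserver"

Here the journal already answers the question: kubelet exits during configuration validation, so no control-plane containers are created and every kubectl call against localhost:8443 is refused.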
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 6 (581.1633ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:57:56.532641    6240 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (110.32s)
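The kubelet journal above shows the actual root cause for this group of failures: this kubelet (v1.35.0-beta.0) refuses to start on a host that still exposes cgroup v1, and the docker info in these logs reports a 5.15.153.1-microsoft-standard-WSL2 kernel with the cgroupfs driver. A quick check of which cgroup version the host mounts (plain coreutils, run inside the WSL2 distro or via minikube ssh):

	# cgroup2fs -> cgroup v2 (unified hierarchy); tmpfs -> cgroup v1/hybrid
	stat -fc %T /sys/fs/cgroup/

A commonly cited workaround for WSL2-backed Docker Desktop, offered here as an assumption rather than something this run verifies, is to force the WSL2 kernel onto cgroup v2 and restart WSL:

	# In %USERPROFILE%\.wslconfig on the Windows host:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all
	wsl --shutdown   # then restart Docker Desktop so the setting takes effect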

TestStartStop/group/no-preload/serial/SecondStart (379.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-104100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-104100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m16.8945599s)

-- stdout --
	* [no-preload-104100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "no-preload-104100" primary control-plane node in "no-preload-104100" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1205 07:58:02.352230    4560 out.go:360] Setting OutFile to fd 1232 ...
	I1205 07:58:02.403243    4560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:58:02.403243    4560 out.go:374] Setting ErrFile to fd 1536...
	I1205 07:58:02.403243    4560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:58:02.417233    4560 out.go:368] Setting JSON to false
	I1205 07:58:02.419229    4560 start.go:133] hostinfo: {"hostname":"minikube4","uptime":12740,"bootTime":1764908742,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 07:58:02.419229    4560 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 07:58:02.424241    4560 out.go:179] * [no-preload-104100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 07:58:02.430236    4560 notify.go:221] Checking for updates...
	I1205 07:58:02.434235    4560 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:58:02.446392    4560 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:58:02.451695    4560 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 07:58:02.456849    4560 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:58:02.462321    4560 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:58:02.468375    4560 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:58:02.469056    4560 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:58:02.591298    4560 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 07:58:02.594299    4560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:58:02.838946    4560 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:94 SystemTime:2025-12-05 07:58:02.820193395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:58:02.844946    4560 out.go:179] * Using the docker driver based on existing profile
	I1205 07:58:02.848946    4560 start.go:309] selected driver: docker
	I1205 07:58:02.848946    4560 start.go:927] validating driver "docker" against &{Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:58:02.848946    4560 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:58:02.892119    4560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:58:03.125867    4560 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:94 SystemTime:2025-12-05 07:58:03.107570082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 07:58:03.126869    4560 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:58:03.126869    4560 cni.go:84] Creating CNI manager for ""
	I1205 07:58:03.126869    4560 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 07:58:03.126869    4560 start.go:353] cluster config:
	{Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:58:03.130870    4560 out.go:179] * Starting "no-preload-104100" primary control-plane node in "no-preload-104100" cluster
	I1205 07:58:03.132868    4560 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 07:58:03.137870    4560 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:58:03.141873    4560 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:58:03.141873    4560 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:58:03.141873    4560 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\config.json ...
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 07:58:03.141873    4560 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 07:58:03.386242    4560 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:58:03.386242    4560 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:58:03.386242    4560 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:58:03.386242    4560 start.go:360] acquireMachinesLock for no-preload-104100: {Name:mk6569d967c60dcd29e05d158ce4a7a18e59aa2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:03.386242    4560 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-104100"
	I1205 07:58:03.386242    4560 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:58:03.386242    4560 fix.go:54] fixHost starting: 
	I1205 07:58:03.401074    4560 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:58:03.528220    4560 fix.go:112] recreateIfNeeded on no-preload-104100: state=Stopped err=<nil>
	W1205 07:58:03.528220    4560 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:58:03.533225    4560 out.go:252] * Restarting existing docker container for "no-preload-104100" ...
	I1205 07:58:03.538225    4560 cli_runner.go:164] Run: docker start no-preload-104100
	I1205 07:58:06.273791    4560 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.273791    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 07:58:06.274797    4560 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.1328749s
	I1205 07:58:06.274797    4560 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 07:58:06.274797    4560 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.274797    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 07:58:06.274797    4560 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.274797    4560 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.1328749s
	I1205 07:58:06.274797    4560 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 07:58:06.274797    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 07:58:06.275807    4560 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.133885s
	I1205 07:58:06.275807    4560 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:58:06.294795    4560 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.295791    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:58:06.295791    4560 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.152865s
	I1205 07:58:06.295791    4560 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:58:06.296795    4560 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.296795    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 07:58:06.296795    4560 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.1548722s
	I1205 07:58:06.296795    4560 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 07:58:06.318725    4560 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.318725    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 07:58:06.319310    4560 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.1773315s
	I1205 07:58:06.319362    4560 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 07:58:06.339946    4560 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.339946    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:58:06.340963    4560 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.1990398s
	I1205 07:58:06.340963    4560 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:58:06.362458    4560 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:58:06.362458    4560 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:58:06.362458    4560 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2205339s
	I1205 07:58:06.362458    4560 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:58:06.362458    4560 cache.go:87] Successfully saved all images to host disk.
	I1205 07:58:06.779785    4560 cli_runner.go:217] Completed: docker start no-preload-104100: (3.2415088s)
	I1205 07:58:06.786794    4560 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:58:06.871546    4560 kic.go:430] container "no-preload-104100" state is running.
	I1205 07:58:06.879541    4560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-104100
	I1205 07:58:06.950559    4560 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\config.json ...
	I1205 07:58:06.952553    4560 machine.go:94] provisionDockerMachine start ...
	I1205 07:58:06.957547    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:07.017564    4560 main.go:143] libmachine: Using SSH client type: native
	I1205 07:58:07.017564    4560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61566 <nil> <nil>}
	I1205 07:58:07.017564    4560 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:58:07.019541    4560 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 07:58:10.188392    4560 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-104100
	
	I1205 07:58:10.188392    4560 ubuntu.go:182] provisioning hostname "no-preload-104100"
	I1205 07:58:10.193382    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:10.264386    4560 main.go:143] libmachine: Using SSH client type: native
	I1205 07:58:10.264386    4560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61566 <nil> <nil>}
	I1205 07:58:10.264386    4560 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-104100 && echo "no-preload-104100" | sudo tee /etc/hostname
	I1205 07:58:10.485378    4560 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-104100
	
	I1205 07:58:10.489377    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:10.546388    4560 main.go:143] libmachine: Using SSH client type: native
	I1205 07:58:10.547381    4560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61566 <nil> <nil>}
	I1205 07:58:10.547381    4560 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-104100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-104100/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-104100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:58:10.743076    4560 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:58:10.743076    4560 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 07:58:10.743603    4560 ubuntu.go:190] setting up certificates
	I1205 07:58:10.743669    4560 provision.go:84] configureAuth start
	I1205 07:58:10.749842    4560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-104100
	I1205 07:58:10.817953    4560 provision.go:143] copyHostCerts
	I1205 07:58:10.817953    4560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 07:58:10.817953    4560 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 07:58:10.817953    4560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 07:58:10.818953    4560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 07:58:10.818953    4560 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 07:58:10.819952    4560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 07:58:10.819952    4560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 07:58:10.819952    4560 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 07:58:10.820952    4560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 07:58:10.820952    4560 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-104100 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-104100]
	I1205 07:58:10.875956    4560 provision.go:177] copyRemoteCerts
	I1205 07:58:10.880956    4560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:58:10.883951    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:10.946096    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:58:11.073101    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:58:11.104106    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:58:11.132108    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:58:11.162115    4560 provision.go:87] duration metric: took 418.4391ms to configureAuth
	I1205 07:58:11.162115    4560 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:58:11.162115    4560 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:58:11.166111    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:11.221102    4560 main.go:143] libmachine: Using SSH client type: native
	I1205 07:58:11.221102    4560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61566 <nil> <nil>}
	I1205 07:58:11.221102    4560 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 07:58:11.397619    4560 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 07:58:11.397619    4560 ubuntu.go:71] root file system type: overlay
	I1205 07:58:11.397619    4560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 07:58:11.401619    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:11.456624    4560 main.go:143] libmachine: Using SSH client type: native
	I1205 07:58:11.457623    4560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61566 <nil> <nil>}
	I1205 07:58:11.457623    4560 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 07:58:11.652062    4560 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 07:58:11.657058    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:11.723060    4560 main.go:143] libmachine: Using SSH client type: native
	I1205 07:58:11.724064    4560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 61566 <nil> <nil>}
	I1205 07:58:11.724064    4560 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 07:58:11.906145    4560 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:58:11.906145    4560 machine.go:97] duration metric: took 4.9535127s to provisionDockerMachine
	I1205 07:58:11.906145    4560 start.go:293] postStartSetup for "no-preload-104100" (driver="docker")
	I1205 07:58:11.906145    4560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:58:11.911247    4560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:58:11.915634    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:11.970470    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:58:12.101462    4560 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:58:12.109463    4560 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:58:12.109463    4560 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:58:12.109463    4560 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 07:58:12.109463    4560 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 07:58:12.110462    4560 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 07:58:12.115467    4560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:58:12.127461    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 07:58:12.156469    4560 start.go:296] duration metric: took 250.3199ms for postStartSetup
	I1205 07:58:12.161475    4560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:58:12.164462    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:12.213478    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:58:12.343084    4560 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:58:12.351086    4560 fix.go:56] duration metric: took 8.9647016s for fixHost
	I1205 07:58:12.351086    4560 start.go:83] releasing machines lock for "no-preload-104100", held for 8.9647016s
	I1205 07:58:12.355091    4560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-104100
	I1205 07:58:12.407088    4560 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 07:58:12.411084    4560 ssh_runner.go:195] Run: cat /version.json
	I1205 07:58:12.411084    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:12.414079    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:12.465080    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:58:12.479088    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	W1205 07:58:12.575495    4560 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 07:58:12.610494    4560 ssh_runner.go:195] Run: systemctl --version
	I1205 07:58:12.624493    4560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:58:12.632491    4560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:58:12.637491    4560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:58:12.652496    4560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:58:12.652496    4560 start.go:496] detecting cgroup driver to use...
	I1205 07:58:12.652496    4560 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:58:12.652496    4560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1205 07:58:12.683489    4560 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 07:58:12.683489    4560 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 07:58:12.686494    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:58:12.708505    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:58:12.722500    4560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:58:12.726487    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:58:12.745497    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:58:12.764500    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:58:12.784495    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:58:12.810500    4560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:58:12.830494    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:58:12.851506    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:58:12.871509    4560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
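The runs above edit /etc/containerd/config.toml with sed, each pattern using a capture group so the original indentation survives the rewrite. A small in-memory Go equivalent of the SystemdCgroup edit, assuming a toy config string instead of the real file:

    // Rewrite "SystemdCgroup = ..." the way the logged sed command does,
    // preserving leading whitespace via the ( *) capture group.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte("  [plugins.\"io.containerd.grpc.v1.cri\"]\n    SystemdCgroup = true\n")
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(string(re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))))
    }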
	I1205 07:58:12.891500    4560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:58:12.908498    4560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:58:12.925495    4560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:58:13.034087    4560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:58:13.154914    4560 start.go:496] detecting cgroup driver to use...
	I1205 07:58:13.154914    4560 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:58:13.158910    4560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 07:58:13.183908    4560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:58:13.206923    4560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:58:13.285246    4560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:58:13.311727    4560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:58:13.335075    4560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:58:13.361079    4560 ssh_runner.go:195] Run: which cri-dockerd
	I1205 07:58:13.373076    4560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 07:58:13.387079    4560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 07:58:13.411079    4560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 07:58:13.576724    4560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 07:58:13.738837    4560 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 07:58:13.738837    4560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 07:58:13.762832    4560 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 07:58:13.785825    4560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:58:13.947612    4560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 07:58:15.000921    4560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0532926s)
	I1205 07:58:15.003908    4560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:58:15.027915    4560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 07:58:15.050917    4560 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 07:58:15.075911    4560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:58:15.100909    4560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 07:58:15.269150    4560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 07:58:15.419329    4560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:58:15.586019    4560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 07:58:15.611043    4560 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 07:58:15.632024    4560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:58:15.764541    4560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 07:58:15.885940    4560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 07:58:15.904206    4560 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 07:58:15.908345    4560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
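start.go waits up to 60s for /var/run/cri-dockerd.sock and checks it with stat. A sketch of that poll-for-socket loop; the 500ms interval is an assumption, not minikube's actual cadence:

    // Poll until a path exists and is a unix socket, or the deadline passes.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // assumed interval
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }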
	I1205 07:58:15.917402    4560 start.go:564] Will wait 60s for crictl version
	I1205 07:58:15.921400    4560 ssh_runner.go:195] Run: which crictl
	I1205 07:58:15.932064    4560 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:58:15.982137    4560 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 07:58:15.985835    4560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:58:16.027824    4560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 07:58:16.075368    4560 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 07:58:16.078368    4560 cli_runner.go:164] Run: docker exec -t no-preload-104100 dig +short host.docker.internal
	I1205 07:58:16.306367    4560 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 07:58:16.310369    4560 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 07:58:16.316358    4560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
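The bash one-liner above makes the /etc/hosts entry idempotent: it filters out any existing host.minikube.internal line, appends the fresh mapping, and copies the result back over the file. The same shape in Go; the file name below is illustrative, and the real command runs with sudo over SSH:

    // Pin a hostname to an IP in a hosts-style file: drop any previous
    // entry for the name, then append the new tab-separated mapping.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("hosts.test", "192.168.65.254", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }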
	I1205 07:58:16.336348    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:16.390613    4560 kubeadm.go:884] updating cluster {Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:58:16.390613    4560 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 07:58:16.393622    4560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 07:58:16.426290    4560 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 07:58:16.426879    4560 cache_images.go:86] Images are preloaded, skipping loading
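preload.go and cache_images.go decide above that image loading can be skipped because every required image is already in the docker image list. A sketch of that containment check; the expected list below is a subset of the one printed above:

    // Report whether every wanted image appears in the newline-separated
    // output of `docker images --format {{.Repository}}:{{.Tag}}`.
    package main

    import (
        "fmt"
        "strings"
    )

    func preloaded(have string, want []string) bool {
        got := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(have), "\n") {
            got[strings.TrimSpace(line)] = true
        }
        for _, img := range want {
            if !got[img] {
                return false
            }
        }
        return true
    }

    func main() {
        have := "registry.k8s.io/pause:3.10.1\nregistry.k8s.io/etcd:3.6.5-0\n"
        fmt.Println(preloaded(have, []string{"registry.k8s.io/pause:3.10.1"}))
    }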
	I1205 07:58:16.426879    4560 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 07:58:16.426879    4560 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-104100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:58:16.429880    4560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 07:58:16.509886    4560 cni.go:84] Creating CNI manager for ""
	I1205 07:58:16.509886    4560 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 07:58:16.509886    4560 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:58:16.509886    4560 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-104100 NodeName:no-preload-104100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:58:16.509886    4560 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-104100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
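The kubeadm documents above are rendered from the options struct logged at kubeadm.go:190. A sketch of that rendering for just the InitConfiguration fragment, using text/template; the struct below is a made-up subset of minikube's real one:

    // Render a slice of the InitConfiguration shown above from a small
    // options struct. Illustrative only; the real template is larger.
    package main

    import (
        "os"
        "text/template"
    )

    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.103.2",
            BindPort:         8443,
            NodeName:         "no-preload-104100",
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
        }); err != nil {
            panic(err)
        }
    }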
	
	I1205 07:58:16.513887    4560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:58:16.528873    4560 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:58:16.533874    4560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:58:16.549894    4560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1205 07:58:16.571885    4560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:58:16.593891    4560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1205 07:58:16.618875    4560 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:58:16.625878    4560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:58:16.652254    4560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:58:16.769335    4560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:58:16.794862    4560 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100 for IP: 192.168.103.2
	I1205 07:58:16.794862    4560 certs.go:195] generating shared ca certs ...
	I1205 07:58:16.794862    4560 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:58:16.795709    4560 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 07:58:16.795709    4560 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 07:58:16.795709    4560 certs.go:257] generating profile certs ...
	I1205 07:58:16.796529    4560 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\client.key
	I1205 07:58:16.796529    4560 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key.f2627f70
	I1205 07:58:16.797150    4560 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.key
	I1205 07:58:16.798006    4560 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 07:58:16.798006    4560 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 07:58:16.798006    4560 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 07:58:16.798583    4560 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 07:58:16.798904    4560 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 07:58:16.799195    4560 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 07:58:16.799772    4560 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 07:58:16.800862    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:58:16.842475    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:58:16.877570    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:58:16.910055    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 07:58:16.942140    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:58:16.972011    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:58:17.000743    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:58:17.028343    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-104100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:58:17.060179    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 07:58:17.092851    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 07:58:17.125057    4560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:58:17.156775    4560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:58:17.183765    4560 ssh_runner.go:195] Run: openssl version
	I1205 07:58:17.198233    4560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 07:58:17.220296    4560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 07:58:17.237241    4560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 07:58:17.244237    4560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 07:58:17.248242    4560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 07:58:17.298036    4560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:58:17.434786    4560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 07:58:17.531961    4560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 07:58:17.549572    4560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 07:58:17.558891    4560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 07:58:17.563369    4560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 07:58:17.612727    4560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:58:17.629660    4560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:58:17.647206    4560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:58:17.664375    4560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:58:17.671623    4560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:58:17.675850    4560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:58:17.726304    4560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:58:17.744457    4560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:58:17.765494    4560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:58:17.824747    4560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:58:17.889900    4560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:58:17.951151    4560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:58:18.005894    4560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:58:18.062640    4560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
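Each openssl invocation above uses -checkend 86400, which exits nonzero if the certificate expires within the next 24 hours. The same check can be done in Go with crypto/x509; the file path below is illustrative:

    // Equivalent of `openssl x509 -checkend 86400`: parse a PEM cert and
    // report whether it expires within the given window.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }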
	I1205 07:58:18.107330    4560 kubeadm.go:401] StartCluster: {Name:no-preload-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:58:18.111326    4560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 07:58:18.144083    4560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:58:18.156745    4560 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:58:18.156745    4560 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:58:18.161337    4560 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:58:18.175370    4560 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:58:18.180231    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:18.234986    4560 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-104100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:58:18.234986    4560 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-104100" cluster setting kubeconfig missing "no-preload-104100" context setting]
	I1205 07:58:18.236255    4560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
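kubeconfig.go detects above that the "no-preload-104100" cluster and context entries are missing and repairs the file under a write lock. A sketch of that repair using client-go's clientcmd package, assuming that filling in the server and matching names is all that is needed; the server URL below is a placeholder, not the real mapped port:

    // Ensure a kubeconfig has cluster and context entries for a profile.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func ensureProfile(kubeconfig, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &api.Cluster{Server: server}
        }
        if _, ok := cfg.Contexts[name]; !ok {
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        return clientcmd.WriteToFile(*cfg, kubeconfig)
    }

    func main() {
        // Placeholder server address; minikube uses the host port mapped
        // to the container's 8443.
        if err := ensureProfile("kubeconfig", "no-preload-104100", "https://127.0.0.1:0"); err != nil {
            fmt.Println(err)
        }
    }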
	I1205 07:58:18.258690    4560 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:58:18.397741    4560 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 07:58:18.397837    4560 kubeadm.go:602] duration metric: took 241.0886ms to restartPrimaryControlPlane
	I1205 07:58:18.397837    4560 kubeadm.go:403] duration metric: took 290.5024ms to StartCluster
	I1205 07:58:18.397872    4560 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:58:18.398016    4560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 07:58:18.399443    4560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:58:18.400426    4560 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 07:58:18.400387    4560 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:58:18.400586    4560 addons.go:70] Setting storage-provisioner=true in profile "no-preload-104100"
	I1205 07:58:18.400586    4560 addons.go:70] Setting default-storageclass=true in profile "no-preload-104100"
	I1205 07:58:18.400790    4560 addons.go:239] Setting addon storage-provisioner=true in "no-preload-104100"
	I1205 07:58:18.400586    4560 addons.go:70] Setting dashboard=true in profile "no-preload-104100"
	I1205 07:58:18.400790    4560 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-104100"
	I1205 07:58:18.400850    4560 addons.go:239] Setting addon dashboard=true in "no-preload-104100"
	I1205 07:58:18.400850    4560 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 07:58:18.400850    4560 host.go:66] Checking if "no-preload-104100" exists ...
	W1205 07:58:18.400850    4560 addons.go:248] addon dashboard should already be in state true
	I1205 07:58:18.401047    4560 host.go:66] Checking if "no-preload-104100" exists ...
	I1205 07:58:18.402982    4560 out.go:179] * Verifying Kubernetes components...
	I1205 07:58:18.411041    4560 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:58:18.411041    4560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:58:18.412044    4560 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:58:18.412044    4560 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:58:18.474898    4560 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:58:18.474898    4560 addons.go:239] Setting addon default-storageclass=true in "no-preload-104100"
	I1205 07:58:18.475887    4560 host.go:66] Checking if "no-preload-104100" exists ...
	I1205 07:58:18.480879    4560 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:58:18.480879    4560 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:58:18.482879    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:58:18.482879    4560 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:58:18.484877    4560 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:58:18.484877    4560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:58:18.486875    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:18.486875    4560 cli_runner.go:164] Run: docker container inspect no-preload-104100 --format={{.State.Status}}
	I1205 07:58:18.487874    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:18.543874    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:58:18.543874    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:58:18.544874    4560 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:58:18.544874    4560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:58:18.548875    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:18.607061    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-104100\id_rsa Username:docker}
	I1205 07:58:18.608039    4560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:58:18.692830    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:58:18.692830    4560 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:58:18.696828    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:58:18.712829    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:58:18.712829    4560 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:58:18.732829    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:58:18.732829    4560 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:58:18.765983    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:58:18.766029    4560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:58:18.774015    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:58:18.796733    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:58:18.796733    4560 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:58:18.851443    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:58:18.851443    4560 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:58:18.993978    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:58:18.994050    4560 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:58:19.018828    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:58:19.018883    4560 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1205 07:58:19.026602    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:58:19.026602    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.026602    4560 retry.go:31] will retry after 295.857417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.027608    4560 retry.go:31] will retry after 267.841819ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
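Every apply above fails the same way: kubectl's validation step dials https://localhost:8443 while the just-restarted apiserver is not accepting connections yet, so retry.go schedules another attempt after a randomized delay. A sketch of that retry-with-jittered-backoff shape; the attempt count and delay bounds are illustrative:

    // Retry a flaky operation with a small randomized backoff between
    // attempts, returning the last error if all attempts fail.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            backoff := time.Duration(200+rand.Intn(400)) * time.Millisecond
            fmt.Printf("will retry after %s: %v\n", backoff, err)
            time.Sleep(backoff)
        }
        return err
    }

    func main() {
        i := 0
        err := retry(5, func() error {
            i++
            if i < 3 {
                return errors.New("connection refused")
            }
            return nil
        })
        fmt.Println("final:", err)
    }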
	I1205 07:58:19.032009    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-104100
	I1205 07:58:19.053124    4560 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:58:19.053124    4560 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:58:19.095304    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:58:19.097697    4560 node_ready.go:35] waiting up to 6m0s for node "no-preload-104100" to be "Ready" ...
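node_ready.go above starts a 6-minute wait for the node to report Ready. A sketch of that wait as a generic poll loop; checkReady below is a placeholder for the real Kubernetes API query of the node's Ready condition:

    // Poll a condition until it holds or a deadline passes.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func pollUntil(timeout, interval time.Duration, cond func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ok, err := cond()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out")
    }

    func main() {
        checkReady := func() (bool, error) {
            // Placeholder: minikube checks the node's Ready condition
            // via the Kubernetes API here.
            return true, nil
        }
        fmt.Println(pollUntil(6*time.Minute, 2*time.Second, checkReady))
    }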
	W1205 07:58:19.204769    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.204769    4560 retry.go:31] will retry after 354.799935ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.301653    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:58:19.330408    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:19.417153    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.417153    4560 retry.go:31] will retry after 345.707429ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:58:19.458896    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.458896    4560 retry.go:31] will retry after 350.563323ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.564408    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:19.653393    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.653393    4560 retry.go:31] will retry after 221.665486ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.768387    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:58:19.816449    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:58:19.881708    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:19.884703    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.884703    4560 retry.go:31] will retry after 498.763773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:58:19.990163    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:19.990271    4560 retry.go:31] will retry after 589.065808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:58:20.084170    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:20.085173    4560 retry.go:31] will retry after 651.78389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:20.390315    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:20.489726    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:20.490506    4560 retry.go:31] will retry after 1.068344539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:20.584159    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:20.691800    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:20.692327    4560 retry.go:31] will retry after 702.482109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:20.742901    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:20.829311    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:20.829311    4560 retry.go:31] will retry after 532.026326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:21.367180    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:58:21.399536    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:21.454790    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:21.454790    4560 retry.go:31] will retry after 936.575027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:58:21.482511    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:21.482511    4560 retry.go:31] will retry after 749.578838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:21.565170    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:21.644616    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:21.644616    4560 retry.go:31] will retry after 1.446693937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:22.237657    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:22.331995    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:22.332144    4560 retry.go:31] will retry after 1.502798757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:22.396659    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:22.487568    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:22.487613    4560 retry.go:31] will retry after 2.230051259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:23.096492    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:23.223399    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:23.223399    4560 retry.go:31] will retry after 2.01565605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:23.839955    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:23.922460    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:23.922460    4560 retry.go:31] will retry after 1.785763358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:24.722243    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:24.815068    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:24.815157    4560 retry.go:31] will retry after 2.636388395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:25.244753    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:25.326929    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:25.326929    4560 retry.go:31] will retry after 1.665167424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:25.714437    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:25.791284    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:25.791284    4560 retry.go:31] will retry after 5.684179428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:26.997158    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:27.085552    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:27.085552    4560 retry.go:31] will retry after 5.208348353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:27.457453    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:27.548030    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:27.548030    4560 retry.go:31] will retry after 3.199574505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 07:58:29.132634    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 07:58:30.752382    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:30.834576    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:30.834576    4560 retry.go:31] will retry after 6.40444211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:58:31.480904    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:31.562429    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:31.562429    4560 retry.go:31] will retry after 3.902148803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:58:32.298651    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:32.422333    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:32.422449    4560 retry.go:31] will retry after 8.425260097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:58:35.473046    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:35.564574    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:35.564574    4560 retry.go:31] will retry after 9.632260478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:58:37.242635    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:37.354786    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:37.354786    4560 retry.go:31] will retry after 5.849630928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 07:58:39.171141    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 07:58:40.852239    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:40.931291    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:40.931391    4560 retry.go:31] will retry after 5.860167591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:58:43.209972    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:43.298906    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:43.298906    4560 retry.go:31] will retry after 11.858812747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:58:45.202200    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:58:45.284851    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:45.284906    4560 retry.go:31] will retry after 16.730873622s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:58:46.795130    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:46.879976    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:46.879976    4560 retry.go:31] will retry after 8.798931731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 07:58:49.205535    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 07:58:55.162457    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:58:55.286444    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:55.286444    4560 retry.go:31] will retry after 26.899385399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:58:55.683840    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:58:55.761409    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:58:55.761409    4560 retry.go:31] will retry after 26.718845184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 07:58:59.240580    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 07:59:02.020229    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:59:02.109722    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:59:02.109722    4560 retry.go:31] will retry after 24.251416133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1205 07:59:09.280236    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 07:59:19.315353    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 07:59:22.190540    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:59:22.270723    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:59:22.270799    4560 retry.go:31] will retry after 30.650067517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:59:22.486166    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:59:22.567989    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:59:22.567989    4560 retry.go:31] will retry after 22.396596302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:59:26.365994    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:59:26.453479    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:59:26.453479    4560 retry.go:31] will retry after 44.981516042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:59:29.349058    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 07:59:39.382430    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 07:59:44.970937    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:59:45.064654    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:59:45.064654    4560 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:59:49.416243    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 07:59:52.926590    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:59:53.048485    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:59:53.049097    4560 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:59:59.451221    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:00:09.487880    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:00:11.438817    4560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:00:11.520497    4560 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:00:11.521022    4560 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:00:11.744789    4560 out.go:179] * Enabled addons: 
	I1205 08:00:11.761171    4560 addons.go:530] duration metric: took 1m53.3579781s for enable addons: enabled=[]
	W1205 08:00:19.522107    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:00:29.554112    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:00:39.588786    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:00:49.622506    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:00:59.660282    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:01:09.695070    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:01:19.730135    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:01:29.766484    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:01:39.802268    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:01:49.834924    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:01:59.866100    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:02:09.901947    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:02:19.934831    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:02:29.969437    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:02:40.009310    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:02:50.043038    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:00.077696    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:10.109694    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:20.144676    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:30.179859    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:40.215563    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:50.273234    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:00.335905    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:10.375252    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:19.104488    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 08:04:19.104793    4560 node_ready.go:38] duration metric: took 6m0.001013s for node "no-preload-104100" to be "Ready" ...
	I1205 08:04:19.107356    4560 out.go:203] 
	W1205 08:04:19.110511    4560 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 08:04:19.110554    4560 out.go:285] * 
	W1205 08:04:19.112383    4560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:04:19.116573    4560 out.go:203] 

** /stderr **
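Note on the failure mode above: every kubectl apply in the retry loop fails during client-side validation only because downloading the OpenAPI schema requires the apiserver, which is refusing connections on localhost:8443. The --validate=false workaround suggested in the stderr would not help here, since the apply itself must reach the same endpoint. A minimal manual probe (not part of the harness; it assumes curl is available in the node image, as it is in kicbase) that isolates the root cause:

	out/minikube-windows-amd64.exe ssh -p no-preload-104100 -- curl -sk https://localhost:8443/healthz

A connection-refused result here confirms the apiserver never came up, matching the node_ready EOF retries above.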
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-104100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 80
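(Exit status 80 corresponds to minikube's guest-error exit-code class, consistent with the GUEST_START reason printed in the stderr above.)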
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-104100
helpers_test.go:243: (dbg) docker inspect no-preload-104100:

-- stdout --
	[
	    {
	        "Id": "5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043",
	        "Created": "2025-12-05T07:47:18.090294673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 414493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:58:06.386924979Z",
	            "FinishedAt": "2025-12-05T07:57:57.665009272Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hosts",
	        "LogPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043-json.log",
	        "Name": "/no-preload-104100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-104100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-104100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-104100",
	                "Source": "/var/lib/docker/volumes/no-preload-104100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-104100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-104100",
	                "name.minikube.sigs.k8s.io": "no-preload-104100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db4519a857b1cb5f334b0df06abf490ceaca02f8fd29297b385218566b669e33",
	            "SandboxKey": "/var/run/docker/netns/db4519a857b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61566"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61564"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61565"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-104100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "707b5f83051fc4c181f3506b97f5ea358824531428895a55938badd3159b6c9f",
	                    "EndpointID": "4524197e7adfcc8ed0cbc2de51217f52907988f5d42b7f9fdc11804701eaff4d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-104100",
	                        "5f2a793d7573"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
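For reference, the host port the harness dials for the apiserver (127.0.0.1:61565, the 8443/tcp mapping above) can be read directly from the inspect output with a Go template; a minimal sketch with POSIX-shell quoting (Windows cmd/PowerShell quoting differs), using the container name from this test:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-104100

This should print 61565 while the container is running.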
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 2 (637.5303ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
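Where the {{.Host}} template only reports Running, a JSON view shows per-component state; a minimal sketch using the same binary and profile:

	out/minikube-windows-amd64.exe status -p no-preload-104100 -o json

minikube status encodes component health in its exit code rather than signalling a command failure, which is why the harness treats exit status 2 as possibly OK: the host container is Running while the control plane is not.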
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25: (1.2059423s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p bridge-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker                                                                                                               │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:03 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                 │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                            │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                      │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cri-dockerd --version                                                                                                                                                                               │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                 │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl cat containerd --no-pager                                                                                                                                                                 │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                          │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /etc/containerd/config.toml                                                                                                                                                                     │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo containerd config dump                                                                                                                                                                              │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p flannel-218000 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo crio config                                                                                                                                                                                         │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ delete  │ -p flannel-218000                                                                                                                                                                                                          │ flannel-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ start   │ -p kubenet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker                                                                                                  │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:04 UTC │
	│ stop    │ -p newest-cni-042100 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:03 UTC │ 05 Dec 25 08:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-042100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:03 UTC │ 05 Dec 25 08:03 UTC │
	│ start   │ -p newest-cni-042100 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:03 UTC │                     │
	│ ssh     │ -p bridge-218000 pgrep -a kubelet                                                                                                                                                                                          │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:03 UTC │ 05 Dec 25 08:03 UTC │
	│ ssh     │ -p kubenet-218000 pgrep -a kubelet                                                                                                                                                                                         │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo cat /etc/nsswitch.conf                                                                                                                                                                               │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo cat /etc/hosts                                                                                                                                                                                       │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo cat /etc/resolv.conf                                                                                                                                                                                 │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo crictl pods                                                                                                                                                                                          │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	W1205 08:03:44.511207    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:46.513793    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	Log file created at: 2025/12/05 08:03:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:03:48.079593    6576 out.go:360] Setting OutFile to fd 1628 ...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.133685    6576 out.go:374] Setting ErrFile to fd 1512...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.149881    6576 out.go:368] Setting JSON to false
	I1205 08:03:48.152825    6576 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13085,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:03:48.152825    6576 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:03:48.159945    6576 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:03:48.164658    6576 notify.go:221] Checking for updates...
	I1205 08:03:48.167308    6576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:48.170547    6576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:03:48.173264    6576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:03:48.177277    6576 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:03:48.179134    6576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:03:48.182963    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:48.184223    6576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:03:48.306826    6576 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:03:48.310816    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.562528    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.540004205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.565521    6576 out.go:179] * Using the docker driver based on existing profile
	I1205 08:03:48.568528    6576 start.go:309] selected driver: docker
	I1205 08:03:48.568528    6576 start.go:927] validating driver "docker" against &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.568528    6576 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:03:48.621627    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.870676    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.852383077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.870676    6576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
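The VerifyComponents map above is what this profile's wait settings expand to: only apiserver, default_sa, and system_pods are blocked on, while apps_running, extra, kubelet, and node_ready are skipped. As a rough sketch (flag form inferred from the map, not taken from this run), the same selection corresponds to minikube's --wait flag:
	minikube start -p newest-cni-042100 --wait=apiserver,system_pods,default_sa
This is why a newest-cni start can report success while some pods are still not Ready.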
	I1205 08:03:48.870676    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:03:48.871676    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:03:48.871676    6576 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.874674    6576 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 08:03:48.876674    6576 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:03:48.879674    6576 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:03:48.881674    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:03:48.881674    6576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 08:03:48.924123    6576 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:48.965045    6576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:03:48.965045    6576 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 08:03:49.173795    6576 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
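Both 404s above are expected for a pre-release Kubernetes build: no preloaded-images tarball is published for v1.35.0-beta.0, so minikube falls back to caching each component image individually (the localpath.go lines that follow). Assuming a curl-capable shell, the missing tarball can be confirmed by hand:
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 | head -1   # expect HTTP 404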
	I1205 08:03:49.174041    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 08:03:49.174210    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 08:03:49.176070    6576 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:03:49.176070    6576 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:49.176610    6576 start.go:364] duration metric: took 539.2µs to acquireMachinesLock for "newest-cni-042100"
	I1205 08:03:49.176876    6576 start.go:96] Skipping create...Using existing machine configuration
	I1205 08:03:49.176954    6576 fix.go:54] fixHost starting: 
	I1205 08:03:49.185185    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:49.467905    6576 fix.go:112] recreateIfNeeded on newest-cni-042100: state=Stopped err=<nil>
	W1205 08:03:49.468085    6576 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 08:03:46.247259    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:48.745542    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:50.273234    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:48.514113    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:50.532984    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:53.014533    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:49.492567    6576 out.go:252] * Restarting existing docker container for "newest-cni-042100" ...
	I1205 08:03:49.497575    6576 cli_runner.go:164] Run: docker start newest-cni-042100
	I1205 08:03:50.779131    6576 cli_runner.go:217] Completed: docker start newest-cni-042100: (1.2815354s)
	I1205 08:03:50.788112    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:51.139299    6576 kic.go:430] container "newest-cni-042100" state is running.
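The restart-instead-of-recreate decision is driven entirely by the container state probed above; the same sequence can be reproduced by hand with the commands this log already shows:
	docker container inspect newest-cni-042100 --format={{.State.Status}}   # "exited" before the fix, "running" after
	docker start newest-cni-042100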
	I1205 08:03:51.164376    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:51.273747    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:51.276892    6576 machine.go:94] provisionDockerMachine start ...
	I1205 08:03:51.284394    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:51.396042    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:51.397040    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:51.397040    6576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:03:51.400042    6576 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
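The handshake EOF is transient: docker start returns before sshd inside the node container is listening, so the first dial of the published port fails and libmachine retries until the hostname command succeeds (at 08:03:54 below). A hypothetical manual probe of the same mapped port, from any shell with nc available:
	until nc -z 127.0.0.1 62708; do sleep 1; done   # port number taken from the libmachine dial above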
	I1205 08:03:52.385305    6576 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.385658    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 08:03:52.385720    6576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.211458s
	I1205 08:03:52.385800    6576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 08:03:52.435659    6576 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.435659    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 08:03:52.435659    6576 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2613971s
	I1205 08:03:52.435659    6576 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 08:03:52.467883    6576 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.468216    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 08:03:52.468216    6576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.2939732s
	I1205 08:03:52.468216    6576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 08:03:52.472465    6576 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.472465    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 08:03:52.472465    6576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2982024s
	I1205 08:03:52.472465    6576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 08:03:52.472991    6576 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.473088    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 08:03:52.473088    6576 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2988253s
	I1205 08:03:52.473088    6576 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 08:03:52.478918    6576 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.479537    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 08:03:52.479537    6576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3052743s
	I1205 08:03:52.479537    6576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 08:03:52.488107    6576 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.489284    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 08:03:52.489284    6576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.3150206s
	I1205 08:03:52.489284    6576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 08:03:52.587256    6576 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.588098    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 08:03:52.588098    6576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.413907s
	I1205 08:03:52.588098    6576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 08:03:52.588098    6576 cache.go:87] Successfully saved all images to host disk.
	W1205 08:03:50.818460    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	I1205 08:03:53.244351    4412 pod_ready.go:94] pod "coredns-66bc5c9577-zrgxp" is "Ready"
	I1205 08:03:53.244351    4412 pod_ready.go:86] duration metric: took 21.0105368s for pod "coredns-66bc5c9577-zrgxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.250834    4412 pod_ready.go:83] waiting for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.262503    4412 pod_ready.go:94] pod "etcd-bridge-218000" is "Ready"
	I1205 08:03:53.262503    4412 pod_ready.go:86] duration metric: took 11.6685ms for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.271087    4412 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.281426    4412 pod_ready.go:94] pod "kube-apiserver-bridge-218000" is "Ready"
	I1205 08:03:53.281426    4412 pod_ready.go:86] duration metric: took 10.3388ms for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.286385    4412 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.438718    4412 pod_ready.go:94] pod "kube-controller-manager-bridge-218000" is "Ready"
	I1205 08:03:53.438718    4412 pod_ready.go:86] duration metric: took 152.3311ms for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.641268    4412 pod_ready.go:83] waiting for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.039664    4412 pod_ready.go:94] pod "kube-proxy-8r4gs" is "Ready"
	I1205 08:03:54.039664    4412 pod_ready.go:86] duration metric: took 398.3895ms for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.241161    4412 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:94] pod "kube-scheduler-bridge-218000" is "Ready"
	I1205 08:03:54.641085    4412 pod_ready.go:86] duration metric: took 399.9175ms for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:40] duration metric: took 32.4419039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:54.749081    4412 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:03:54.754768    4412 out.go:179] * Done! kubectl is now configured to use "bridge-218000" cluster and "default" namespace by default
	W1205 08:03:55.516894    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:58.012284    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:54.578463    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.578463    6576 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 08:03:54.583153    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.645702    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.646148    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.646193    6576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 08:03:54.866524    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.872867    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.933417    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.934199    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.934272    6576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:03:55.129977    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:03:55.129977    6576 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:03:55.129977    6576 ubuntu.go:190] setting up certificates
	I1205 08:03:55.129977    6576 provision.go:84] configureAuth start
	I1205 08:03:55.133735    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:55.190185    6576 provision.go:143] copyHostCerts
	I1205 08:03:55.190185    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:03:55.190185    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:03:55.190984    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:03:55.191986    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:03:55.191986    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:03:55.192251    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:03:55.193178    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:03:55.193178    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:03:55.193462    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:03:55.194234    6576 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
	I1205 08:03:55.277216    6576 provision.go:177] copyRemoteCerts
	I1205 08:03:55.282373    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:03:55.285821    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.350220    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:55.476652    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:03:55.511250    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:03:55.546706    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 08:03:55.583614    6576 provision.go:87] duration metric: took 453.6304ms to configureAuth
	I1205 08:03:55.583614    6576 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:03:55.585275    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:55.589206    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.651189    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.652212    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.652246    6576 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:03:55.836329    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:03:55.837449    6576 ubuntu.go:71] root file system type: overlay
	I1205 08:03:55.837646    6576 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:03:55.841558    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.910453    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.911069    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.911069    6576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:03:56.123635    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:03:56.128031    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.191540    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:56.191765    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:56.191765    6576 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:03:56.396364    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
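The diff || { mv ... && restart } one-liner above makes the unit install idempotent: when the rendered docker.service matches what is already on disk, diff exits 0 and the daemon is left untouched; only a real change triggers daemon-reload and a docker restart. The unit text actually in effect can be checked on the node with:
	sudo systemctl cat docker.service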
	I1205 08:03:56.396364    6576 machine.go:97] duration metric: took 5.1193899s to provisionDockerMachine
	I1205 08:03:56.396364    6576 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 08:03:56.396897    6576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:03:56.402233    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:03:56.406223    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.460168    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.609105    6576 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:03:56.617925    6576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:03:56.617925    6576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:03:56.618732    6576 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:03:56.623542    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:03:56.637899    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:03:56.671787    6576 start.go:296] duration metric: took 274.8468ms for postStartSetup
	I1205 08:03:56.675921    6576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:03:56.678948    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.735289    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.884826    6576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:03:56.893835    6576 fix.go:56] duration metric: took 7.7168367s for fixHost
	I1205 08:03:56.893835    6576 start.go:83] releasing machines lock for "newest-cni-042100", held for 7.7169474s
	I1205 08:03:56.896826    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:56.959384    6576 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:03:56.965413    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.966255    6576 ssh_runner.go:195] Run: cat /version.json
	I1205 08:03:56.973872    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:57.022198    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:57.026201    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	W1205 08:03:57.148711    6576 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
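Exit status 127 here is a host/guest mismatch rather than a network failure: the Windows binary name curl.exe is passed verbatim into bash inside the Linux node, where no such command exists, so the connectivity probe never runs and minikube emits the registry warning a few lines below. A sketch of the same probe with the Linux binary name, assuming curl is present in the node image:
	docker exec newest-cni-042100 curl -sS -m 2 https://registry.k8s.io/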
	I1205 08:03:57.162212    6576 ssh_runner.go:195] Run: systemctl --version
	I1205 08:03:57.181097    6576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:03:57.193288    6576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:03:57.197753    6576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:03:57.214357    6576 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 08:03:57.214357    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.214357    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.214357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:57.242461    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:03:57.262818    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 08:03:57.264705    6576 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:03:57.264749    6576 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:03:57.282712    6576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:03:57.286891    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:57.310466    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.333091    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:57.356105    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.377603    6576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:57.401090    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:57.423330    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:57.445407    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:57.472206    6576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:57.488210    6576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
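These two commands set up the classic kubeadm preflight prerequisites: bridged container traffic must traverse iptables, and IPv4 forwarding must be enabled. Both can be verified inside the node:
	sysctl net.bridge.bridge-nf-call-iptables   # expect "= 1" (requires the br_netfilter module)
	cat /proc/sys/net/ipv4/ip_forward           # expect "1"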
	I1205 08:03:57.505210    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:57.657790    6576 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 08:03:57.802417    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.802417    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.807146    6576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:57.832467    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.857712    6576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:57.930272    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.960276    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:57.984286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:58.017277    6576 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:58.032288    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:58.048281    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:03:58.077282    6576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:58.275290    6576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:58.457293    6576 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:58.457293    6576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:03:58.486286    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:58.509287    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:58.648318    6576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:04:00.173930    6576 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5255881s)
	I1205 08:04:00.177929    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:04:00.201541    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:04:00.228851    6576 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 08:04:00.259044    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:00.283032    6576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:04:00.429299    6576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:04:00.593446    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.738544    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:04:00.766865    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:04:00.791407    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.930315    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:04:01.041317    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:01.059628    6576 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:04:01.064630    6576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:04:01.072635    6576 start.go:564] Will wait 60s for crictl version
	I1205 08:04:01.076636    6576 ssh_runner.go:195] Run: which crictl
	I1205 08:04:01.090615    6576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:04:01.132099    6576 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
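crictl resolves its endpoint from the /etc/crictl.yaml written a few lines earlier; the same query can be made without the config file by passing the cri-dockerd socket explicitly:
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version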
	I1205 08:04:01.136068    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.182106    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.227459    6576 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 08:04:01.231071    6576 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 08:04:01.375969    6576 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:04:01.379962    6576 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:04:01.387350    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.408320    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:01.468320    6576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 08:04:00.335905    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:00.512126    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:04:03.018493    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:04:01.471323    6576 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:04:01.471323    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:04:01.475324    6576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:04:01.511342    6576 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:04:01.512362    6576 cache_images.go:86] Images are preloaded, skipping loading
	I1205 08:04:01.512362    6576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 08:04:01.512362    6576 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 08:04:01.515327    6576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:04:01.600646    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:04:01.600646    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:04:01.600646    6576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 08:04:01.600646    6576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:04:01.600646    6576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:04:01.604645    6576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 08:04:01.617663    6576 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:04:01.621646    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:04:01.634708    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 08:04:01.659457    6576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 08:04:01.681516    6576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1205 08:04:01.709549    6576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:04:01.717165    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
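
The one-liner above is an idempotent hosts-file update: grep -v strips any stale control-plane.minikube.internal line, echo appends the fresh mapping, and the result is staged in /tmp/h.$$ and copied into place, since only the final cp needs sudo. A Go sketch of the same upsert (the helper is illustrative, not minikube's code; it writes in place because it has no sudo boundary to cross):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // upsertHost rewrites hostsPath so exactly one line maps ip to host,
    // mirroring the grep -v / echo / cp pattern in the log above.
    func upsertHost(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry for this hostname (the grep -v step).
    		if !strings.HasSuffix(line, "\t"+host) {
    			out = append(out, line)
    		}
    	}
    	out = append(out, ip+"\t"+host) // append the fresh mapping (the echo step)
    	return os.WriteFile(hostsPath, []byte(strings.Join(out, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }
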
	I1205 08:04:01.737936    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:01.886462    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:01.908845    6576 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 08:04:01.908845    6576 certs.go:195] generating shared ca certs ...
	I1205 08:04:01.908845    6576 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:01.910250    6576 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:04:01.910428    6576 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:04:01.910428    6576 certs.go:257] generating profile certs ...
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 08:04:01.911645    6576 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 08:04:01.912393    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:04:01.912708    6576 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:04:01.912818    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:04:01.913766    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:04:01.914884    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:04:01.946745    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:04:01.978670    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:04:02.020771    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:04:02.052789    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 08:04:02.083785    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:04:02.111686    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:04:02.138106    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:04:02.167957    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:04:02.197699    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:04:02.228974    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:04:02.258542    6576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:04:02.283541    6576 ssh_runner.go:195] Run: openssl version
	I1205 08:04:02.296537    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.312534    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:04:02.327543    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.334539    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.339544    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.392223    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:04:02.408977    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.424981    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:04:02.439981    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.446982    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.451985    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.500175    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:04:02.518368    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.537597    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:04:02.555653    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.562656    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.566659    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.617005    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:04:02.635329    6576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:04:02.649383    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 08:04:02.697863    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 08:04:02.747535    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 08:04:02.802236    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 08:04:02.853222    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 08:04:02.901642    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
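
Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is what tells the restart path whether certs must be regenerated. An equivalent check with Go's standard library (the path is one of the files checked above; the helper itself is a sketch):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the same test as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
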
	I1205 08:04:02.946962    6576 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:04:02.951256    6576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:04:02.986478    6576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:04:02.999955    6576 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 08:04:02.999955    6576 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 08:04:03.003999    6576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 08:04:03.019291    6576 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 08:04:03.022819    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.083372    6576 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.084185    6576 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042100" cluster setting kubeconfig missing "newest-cni-042100" context setting]
	I1205 08:04:03.084741    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.109144    6576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 08:04:03.128232    6576 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
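
The diff -u of kubeadm.yaml against kubeadm.yaml.new (two lines up) is how the restart path concludes the running cluster needs no reconfiguration: the freshly rendered config matches the one already on disk. The decision reduces to a byte comparison, sketched here (paths from the log; the helper is illustrative, not minikube's implementation):

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"
    )

    // needsReconfigure reports whether the freshly rendered config differs
    // from the one the running cluster was started with, the decision the
    // `diff -u kubeadm.yaml kubeadm.yaml.new` run above feeds into.
    func needsReconfigure(livePath, newPath string) (bool, error) {
    	live, err := os.ReadFile(livePath)
    	if err != nil {
    		return false, err
    	}
    	fresh, err := os.ReadFile(newPath)
    	if err != nil {
    		return false, err
    	}
    	return !bytes.Equal(live, fresh), nil
    }

    func main() {
    	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("needs reconfiguration:", changed)
    }
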
	I1205 08:04:03.138905    6576 kubeadm.go:602] duration metric: took 138.9481ms to restartPrimaryControlPlane
	I1205 08:04:03.138905    6576 kubeadm.go:403] duration metric: took 191.9404ms to StartCluster
	I1205 08:04:03.138905    6576 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.138905    6576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.141698    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.142419    6576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:04:03.142419    6576 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:04:03.142419    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:04:03.163290    6576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting dashboard=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon dashboard=true in "newest-cni-042100"
	W1205 08:04:03.163290    6576 addons.go:248] addon dashboard should already be in state true
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.192363    6576 out.go:179] * Verifying Kubernetes components...
	I1205 08:04:03.249622    6576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-042100"
	I1205 08:04:03.250609    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.257607    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.258609    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:03.261608    6576 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 08:04:03.264610    6576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:04:03.309607    6576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.309607    6576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:04:03.312609    6576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.312609    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:04:03.312609    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.315610    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.318607    6576 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 08:04:03.510751    7752 pod_ready.go:94] pod "coredns-66bc5c9577-gsfxl" is "Ready"
	I1205 08:04:03.510751    7752 pod_ready.go:86] duration metric: took 25.5102081s for pod "coredns-66bc5c9577-gsfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.517746    7752 pod_ready.go:83] waiting for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.529764    7752 pod_ready.go:94] pod "etcd-kubenet-218000" is "Ready"
	I1205 08:04:03.529764    7752 pod_ready.go:86] duration metric: took 12.0185ms for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.535749    7752 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.544756    7752 pod_ready.go:94] pod "kube-apiserver-kubenet-218000" is "Ready"
	I1205 08:04:03.544756    7752 pod_ready.go:86] duration metric: took 9.007ms for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.549745    7752 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.706418    7752 pod_ready.go:94] pod "kube-controller-manager-kubenet-218000" is "Ready"
	I1205 08:04:03.706418    7752 pod_ready.go:86] duration metric: took 156.6708ms for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.906896    7752 pod_ready.go:83] waiting for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.305526    7752 pod_ready.go:94] pod "kube-proxy-l9mnz" is "Ready"
	I1205 08:04:04.305526    7752 pod_ready.go:86] duration metric: took 398.0934ms for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.506453    7752 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:94] pod "kube-scheduler-kubenet-218000" is "Ready"
	I1205 08:04:04.908413    7752 pod_ready.go:86] duration metric: took 401.8894ms for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:40] duration metric: took 37.4190345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:04:05.004707    7752 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:04:05.007705    7752 out.go:179] * Done! kubectl is now configured to use "kubenet-218000" cluster and "default" namespace by default
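
The pod_ready loop that just finished (process 7752) polls each kube-system pod until its PodReady condition reports True, logging the "is not Ready" warnings seen earlier while it waits. A compact client-go sketch of that kind of poll (the kubeconfig path, timeout, and pod name are placeholders, not minikube's implementation):

    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-kubenet-218000", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			log.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // retry until Ready or deadline
    	}
    	log.Fatal("timed out waiting for pod to be Ready")
    }
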
	I1205 08:04:03.344609    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 08:04:03.344609    6576 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 08:04:03.353008    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.373762    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.389748    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.415749    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.454747    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:03.481745    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.544756    6576 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:04:03.550761    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:03.552751    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.556766    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 08:04:03.556766    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 08:04:03.561743    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.627813    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 08:04:03.627923    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 08:04:03.654463    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 08:04:03.654463    6576 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 08:04:03.731575    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 08:04:03.731654    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1205 08:04:03.751356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.751356    6576 retry.go:31] will retry after 148.467646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.754346    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	W1205 08:04:03.755354    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.755354    6576 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 08:04:03.755354    6576 retry.go:31] will retry after 202.130528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
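
The apply failures above are expected this early in a restart: the apiserver is not listening yet, so kubectl's OpenAPI download for validation is refused, and retry.go schedules another attempt after a short randomized delay. The shape of that loop as a standalone sketch (the backoff constants are illustrative, not minikube's):

    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping a
    // jittered, growing delay between tries, like the retry.go lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		log.Printf("will retry after %v: %v", d, err)
    		time.Sleep(d)
    	}
    	return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := retry(5, 150*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connection refused") // apiserver not up yet
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }
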
	I1205 08:04:03.774491    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 08:04:03.774491    6576 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 08:04:03.793803    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 08:04:03.793803    6576 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 08:04:03.828295    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 08:04:03.828351    6576 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 08:04:03.851355    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.851355    6576 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 08:04:03.876402    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.905217    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:03.957742    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.957742    6576 retry.go:31] will retry after 291.655688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.962256    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:03.992521    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.992521    6576 retry.go:31] will retry after 561.792628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.049441    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:04.057481    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.057556    6576 retry.go:31] will retry after 288.112081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.254701    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.343216    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.343216    6576 retry.go:31] will retry after 359.979776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.350062    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:04.431174    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.431174    6576 retry.go:31] will retry after 483.679942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.549772    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:04.559147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:04.642871    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.642871    6576 retry.go:31] will retry after 528.970083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.708123    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.787283    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.787283    6576 retry.go:31] will retry after 459.684582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.919229    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.004707    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.004707    6576 retry.go:31] will retry after 831.823948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.050298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.177969    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:05.252148    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:05.268807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.268914    6576 retry.go:31] will retry after 1.219301827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:05.381615    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.381684    6576 retry.go:31] will retry after 1.003502336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.548840    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.841493    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.945714    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.945714    6576 retry.go:31] will retry after 1.344373684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.051495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:06.390219    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:06.476859    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.476859    6576 retry.go:31] will retry after 916.677354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.493513    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:06.550586    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:06.586142    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.586142    6576 retry.go:31] will retry after 814.667109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.049968    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:07.295279    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:07.385161    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.385225    6576 retry.go:31] will retry after 2.309719888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.397737    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:07.404241    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.24760459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.229405263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.550637    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.050329    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.551330    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.052416    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.549628    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.699045    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:09.722067    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:09.740066    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:09.854063    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.854063    6576 retry.go:31] will retry after 1.718952919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.926061    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.926061    6576 retry.go:31] will retry after 2.401961347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.960056    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.961057    6576 retry.go:31] will retry after 3.751594778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:10.049061    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:10.375252    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:04:10.549298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.049797    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.550139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
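
Interleaved with the applies, the runner polls `sudo pgrep -xnf kube-apiserver.*minikube.*` at roughly 500ms intervals, waiting for the kube-apiserver process to reappear before any apply can succeed. A minimal version of that wait loop might look like the following; the pattern string comes from the log, while the helper and its timeout are hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// deadline passes. pgrep exits 0 when at least one process matches, so
// Run() returning nil means the apiserver is up.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("no process matching %q after %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
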
	I1205 08:04:11.577133    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:11.663155    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:11.663155    6576 retry.go:31] will retry after 4.120114825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.049572    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:12.333014    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:12.419653    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.419653    6576 retry.go:31] will retry after 2.740389125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.549673    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.050128    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.549901    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.717839    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:13.806807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:13.806807    6576 retry.go:31] will retry after 4.752661147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:14.050521    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:14.551720    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.050682    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.165926    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:15.256271    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.256271    6576 retry.go:31] will retry after 4.534312748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.549805    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.787818    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:15.865098    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.865628    6576 retry.go:31] will retry after 5.383695211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:16.050434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:16.549442    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.049923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.550083    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.049667    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:19.104488    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 08:04:19.104793    4560 node_ready.go:38] duration metric: took 6m0.001013s for node "no-preload-104100" to be "Ready" ...
	I1205 08:04:19.107356    4560 out.go:203] 
	W1205 08:04:19.110511    4560 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 08:04:19.110554    4560 out.go:285] * 
	W1205 08:04:19.112383    4560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:04:19.116573    4560 out.go:203] 
	
	
	==> Docker <==
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.859890520Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.859986630Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860002932Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860012733Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860021234Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860055437Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860095541Z" level=info msg="Initializing buildkit"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.987212646Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.997928393Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998072309Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998148017Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998246927Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:58:14 no-preload-104100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:58:15 no-preload-104100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:58:15 no-preload-104100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:04:21.235868    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:21.236709    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:21.239439    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:21.240954    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:21.242263    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.912373] CPU: 10 PID: 467231 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f59c4559b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f59c4559af6.
	[  +0.000001] RSP: 002b:00007fff7b401a80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.986945] CPU: 6 PID: 467375 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f68553b7b20
	[  +0.000010] Code: Unable to access opcode bytes at RIP 0x7f68553b7af6.
	[  +0.000001] RSP: 002b:00007ffe7761e510 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 08:04:21 up  3:38,  0 user,  load average: 5.29, 4.98, 4.34
	Linux no-preload-104100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:04:17 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:18 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 05 08:04:18 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:18 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:18 no-preload-104100 kubelet[8276]: E1205 08:04:18.571777    8276 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:18 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:18 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:19 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 05 08:04:19 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:19 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:19 no-preload-104100 kubelet[8289]: E1205 08:04:19.324292    8289 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:19 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:19 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:19 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 05 08:04:19 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:19 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:20 no-preload-104100 kubelet[8309]: E1205 08:04:20.092176    8309 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:20 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:20 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:20 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 05 08:04:20 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:20 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:20 no-preload-104100 kubelet[8348]: E1205 08:04:20.843022    8348 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:20 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:20 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
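
Every apply in the loop above fails identically: kubectl tries to download the OpenAPI schema for validation and nothing answers on localhost:8443 inside the node. The `--validate=false` hint in the stderr would only skip validation; with the apiserver down, the apply would still fail. A minimal manual probe, assuming the profile name from the log above and that curl is present in the kicbase image:

	# Same process check the log itself runs, plus a direct health probe.
	minikube -p no-preload-104100 ssh "sudo pgrep -xnf kube-apiserver.*minikube.* || echo 'no kube-apiserver process'"
	minikube -p no-preload-104100 ssh "curl -sk https://localhost:8443/healthz || echo 'apiserver unreachable'"
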
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 2 (618.9766ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (379.99s)
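
The kubelet journal above shows the root cause of this failure mode: the v1.35.0-beta.0 kubelet refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and exits at config validation, with the systemd restart counter already past 480, so the apiserver static pod never comes up and every localhost:8443 call is refused. The 5.15.153.1 WSL2 kernel here presents cgroup v1, consistent with the dockerd deprecation warning in the Docker section. A generic host-side check, not part of the test harness:

	# cgroup2fs -> unified cgroup v2 hierarchy; tmpfs -> legacy/hybrid cgroup v1
	stat -fc %T /sys/fs/cgroup/
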

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (115.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-042100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-042100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m52.7390843s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_2.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-042100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
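
The enable command spent the full 1m52s retrying before exiting 10 (MK_ADDON_ENABLE); the failing callback is the same apiserver-unreachable pattern seen in the no-preload group. Since the callback is an ordinary kubectl apply inside the node, one of the four manifests can be replayed by hand when debugging (command lifted verbatim from the stderr above):

	minikube -p newest-cni-042100 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml"
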
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042100
helpers_test.go:243: (dbg) docker inspect newest-cni-042100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619",
	        "Created": "2025-12-05T07:52:58.091352749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:52:58.409795785Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hosts",
	        "LogPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619-json.log",
	        "Name": "/newest-cni-042100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-042100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042100",
	                "Source": "/var/lib/docker/volumes/newest-cni-042100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042100",
	                "name.minikube.sigs.k8s.io": "newest-cni-042100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d47f853c8e70c6c1bc253dda5cf25981c875d7148f5ef4b552fe47fc0978269",
	            "SandboxKey": "/var/run/docker/netns/5d47f853c8e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60997"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60999"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61000"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "174359b7b50b3bec7b4847d3ab43850e80d128f01a95736675cb3ceba87aab04",
	                    "EndpointID": "bfc06a82bdc1be8e4c759d8c79c5b8e1403b9190ee5a6b321c993ee5e273b5dc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042100",
	                        "ee0c9d80d83a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
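
Per the inspect output, the container itself is healthy ("Status": "running", "RestartCount": 0) and the guest apiserver port is published: 8443/tcp maps to 127.0.0.1:61000. The mapping can be read back directly from Docker, matching the NetworkSettings.Ports block above:

	# Host port mapped to the guest apiserver port (61000 on this run)
	docker port newest-cni-042100 8443/tcp
	# Equivalent inspect template:
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-042100
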
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100: exit status 6 (599.3703ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 08:03:43.327136    9120 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
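
The status error here is a kubeconfig problem rather than a host failure: per the stderr, the "newest-cni-042100" entry does not appear in the kubeconfig the test points at, which is why the stdout warns about a stale context. Outside CI, the fix the output itself suggests would be:

	# Rewrite the kubeconfig entry for this profile to the current endpoint.
	minikube -p newest-cni-042100 update-context
	kubectl config current-context
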
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25: (1.1570048s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-218000 sudo journalctl -xeu kubelet --all --full --no-pager                                                    │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /etc/kubernetes/kubelet.conf                                                                   │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /var/lib/kubelet/config.yaml                                                                   │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ delete  │ -p enable-default-cni-218000                                                                                              │ enable-default-cni-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl status docker --all --full --no-pager                                                    │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl cat docker --no-pager                                                                    │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /etc/docker/daemon.json                                                                        │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo docker system info                                                                                 │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl status cri-docker --all --full --no-pager                                                │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ start   │ -p bridge-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker              │ bridge-218000             │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p flannel-218000 sudo systemctl cat cri-docker --no-pager                                                                │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                           │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /usr/lib/systemd/system/cri-docker.service                                                     │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cri-dockerd --version                                                                              │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl status containerd --all --full --no-pager                                                │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl cat containerd --no-pager                                                                │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /lib/systemd/system/containerd.service                                                         │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo cat /etc/containerd/config.toml                                                                    │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo containerd config dump                                                                             │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo systemctl status crio --all --full --no-pager                                                      │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p flannel-218000 sudo systemctl cat crio --no-pager                                                                      │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                            │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p flannel-218000 sudo crio config                                                                                        │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ delete  │ -p flannel-218000                                                                                                         │ flannel-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ start   │ -p kubenet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker │ kubenet-218000            │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 08:02:38
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:02:38.071220    7752 out.go:360] Setting OutFile to fd 1660 ...
	I1205 08:02:38.119700    7752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:02:38.119700    7752 out.go:374] Setting ErrFile to fd 1184...
	I1205 08:02:38.119700    7752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:02:38.135364    7752 out.go:368] Setting JSON to false
	I1205 08:02:38.137920    7752 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13015,"bootTime":1764908742,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:02:38.137920    7752 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:02:38.144065    7752 out.go:179] * [kubenet-218000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:02:38.147742    7752 notify.go:221] Checking for updates...
	I1205 08:02:38.148826    7752 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:02:38.150960    7752 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:02:38.153599    7752 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:02:38.156744    7752 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:02:38.159037    7752 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:02:38.161691    7752 config.go:182] Loaded profile config "bridge-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:02:38.162383    7752 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:02:38.162383    7752 config.go:182] Loaded profile config "no-preload-104100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:02:38.162383    7752 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:02:38.287808    7752 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:02:38.292812    7752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:02:38.558159    7752 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:106 SystemTime:2025-12-05 08:02:38.5376614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:02:38.573147    7752 out.go:179] * Using the docker driver based on user configuration
	I1205 08:02:38.580148    7752 start.go:309] selected driver: docker
	I1205 08:02:38.580148    7752 start.go:927] validating driver "docker" against <nil>
	I1205 08:02:38.581147    7752 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:02:38.621153    7752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:02:38.877284    7752 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:02:38.855929149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:02:38.877284    7752 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 08:02:38.878281    7752 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:02:38.883277    7752 out.go:179] * Using Docker Desktop driver with root privileges
	I1205 08:02:38.885270    7752 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1205 08:02:38.886306    7752 start.go:353] cluster config:
	{Name:kubenet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:02:38.890267    7752 out.go:179] * Starting "kubenet-218000" primary control-plane node in "kubenet-218000" cluster
	I1205 08:02:38.892279    7752 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:02:38.895272    7752 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:02:38.897271    7752 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 08:02:38.897271    7752 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 08:02:38.898264    7752 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1205 08:02:38.898264    7752 cache.go:65] Caching tarball of preloaded images
	I1205 08:02:38.898264    7752 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1205 08:02:38.898264    7752 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1205 08:02:38.898264    7752 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\config.json ...
	I1205 08:02:38.898264    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\config.json: {Name:mk846eafafd52e071c693ccb218eae363ccaf090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:38.980851    7752 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:02:38.980851    7752 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 08:02:38.980851    7752 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:02:38.980851    7752 start.go:360] acquireMachinesLock for kubenet-218000: {Name:mk8e2fec9bb2b19b56461d549a195b196a68a206 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:02:38.980851    7752 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-218000"
	I1205 08:02:38.980851    7752 start.go:93] Provisioning new machine with config: &{Name:kubenet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:02:38.980851    7752 start.go:125] createHost starting for "" (driver="docker")
	I1205 08:02:36.182184    4412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (13.4580135s)
	I1205 08:02:36.182184    4412 kic.go:203] duration metric: took 13.4619981s to extract preloaded images to volume ...
	I1205 08:02:36.185961    4412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:02:36.415010    4412 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:96 SystemTime:2025-12-05 08:02:36.390616351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:02:36.420012    4412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 08:02:36.666604    4412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-218000 --name bridge-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-218000 --network bridge-218000 --ip 192.168.85.2 --volume bridge-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 08:02:38.283826    4412 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-218000 --name bridge-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-218000 --network bridge-218000 --ip 192.168.85.2 --volume bridge-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b: (1.6171958s)
	I1205 08:02:38.287808    4412 cli_runner.go:164] Run: docker container inspect bridge-218000 --format={{.State.Running}}
	I1205 08:02:38.370917    4412 cli_runner.go:164] Run: docker container inspect bridge-218000 --format={{.State.Status}}
	I1205 08:02:38.439907    4412 cli_runner.go:164] Run: docker exec bridge-218000 stat /var/lib/dpkg/alternatives/iptables
	I1205 08:02:38.564145    4412 oci.go:144] the created container "bridge-218000" has a running status.
	I1205 08:02:38.564145    4412 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa...
	I1205 08:02:38.646146    4412 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 08:02:38.741264    4412 cli_runner.go:164] Run: docker container inspect bridge-218000 --format={{.State.Status}}
	I1205 08:02:38.802261    4412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 08:02:38.802261    4412 kic_runner.go:114] Args: [docker exec --privileged bridge-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 08:02:38.947862    4412 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa...
	W1205 08:02:40.009310    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:02:38.984847    7752 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 08:02:38.984847    7752 start.go:159] libmachine.API.Create for "kubenet-218000" (driver="docker")
	I1205 08:02:38.984847    7752 client.go:173] LocalClient.Create starting
	I1205 08:02:38.984847    7752 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1205 08:02:38.985847    7752 main.go:143] libmachine: Decoding PEM data...
	I1205 08:02:38.985847    7752 main.go:143] libmachine: Parsing certificate...
	I1205 08:02:38.985847    7752 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1205 08:02:38.985847    7752 main.go:143] libmachine: Decoding PEM data...
	I1205 08:02:38.985847    7752 main.go:143] libmachine: Parsing certificate...
	I1205 08:02:38.990848    7752 cli_runner.go:164] Run: docker network inspect kubenet-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 08:02:39.052865    7752 cli_runner.go:211] docker network inspect kubenet-218000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 08:02:39.057857    7752 network_create.go:284] running [docker network inspect kubenet-218000] to gather additional debugging logs...
	I1205 08:02:39.057857    7752 cli_runner.go:164] Run: docker network inspect kubenet-218000
	W1205 08:02:39.114850    7752 cli_runner.go:211] docker network inspect kubenet-218000 returned with exit code 1
	I1205 08:02:39.114850    7752 network_create.go:287] error running [docker network inspect kubenet-218000]: docker network inspect kubenet-218000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-218000 not found
	I1205 08:02:39.114850    7752 network_create.go:289] output of [docker network inspect kubenet-218000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-218000 not found
	
	** /stderr **
	I1205 08:02:39.118850    7752 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 08:02:39.190850    7752 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:02:39.206290    7752 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:02:39.221545    7752 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:02:39.236838    7752 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:02:39.252631    7752 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 08:02:39.264879    7752 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018389f0}
	I1205 08:02:39.264879    7752 network_create.go:124] attempt to create docker network kubenet-218000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1205 08:02:39.267877    7752 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-218000 kubenet-218000
	I1205 08:02:39.618018    7752 network_create.go:108] docker network kubenet-218000 192.168.94.0/24 created
	I1205 08:02:39.618018    7752 kic.go:121] calculated static IP "192.168.94.2" for the "kubenet-218000" container
	I1205 08:02:39.634713    7752 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 08:02:39.699703    7752 cli_runner.go:164] Run: docker volume create kubenet-218000 --label name.minikube.sigs.k8s.io=kubenet-218000 --label created_by.minikube.sigs.k8s.io=true
	I1205 08:02:39.759705    7752 oci.go:103] Successfully created a docker volume kubenet-218000
	I1205 08:02:39.764715    7752 cli_runner.go:164] Run: docker run --rm --name kubenet-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-218000 --entrypoint /usr/bin/test -v kubenet-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 08:02:40.980128    7752 cli_runner.go:217] Completed: docker run --rm --name kubenet-218000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-218000 --entrypoint /usr/bin/test -v kubenet-218000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.2153941s)
	I1205 08:02:40.980128    7752 oci.go:107] Successfully prepared a docker volume kubenet-218000
	I1205 08:02:40.980128    7752 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 08:02:40.980128    7752 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 08:02:40.983706    7752 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 08:02:41.161148    4412 cli_runner.go:164] Run: docker container inspect bridge-218000 --format={{.State.Status}}
	I1205 08:02:41.218370    4412 machine.go:94] provisionDockerMachine start ...
	I1205 08:02:41.222023    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:41.283392    4412 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:41.297392    4412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62540 <nil> <nil>}
	I1205 08:02:41.297392    4412 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:02:41.675221    4412 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-218000
	
	I1205 08:02:41.675221    4412 ubuntu.go:182] provisioning hostname "bridge-218000"
	I1205 08:02:41.679991    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:41.733239    4412 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:41.733239    4412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62540 <nil> <nil>}
	I1205 08:02:41.733239    4412 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-218000 && echo "bridge-218000" | sudo tee /etc/hostname
	I1205 08:02:41.929097    4412 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-218000
	
	I1205 08:02:41.933127    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:41.990632    4412 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:41.990922    4412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62540 <nil> <nil>}
	I1205 08:02:41.990922    4412 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:02:42.185198    4412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:02:42.185198    4412 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:02:42.185198    4412 ubuntu.go:190] setting up certificates
	I1205 08:02:42.185198    4412 provision.go:84] configureAuth start
	I1205 08:02:42.188688    4412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-218000
	I1205 08:02:42.244298    4412 provision.go:143] copyHostCerts
	I1205 08:02:42.244834    4412 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:02:42.244913    4412 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:02:42.244976    4412 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:02:42.246360    4412 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:02:42.246360    4412 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:02:42.246360    4412 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:02:42.247029    4412 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:02:42.247029    4412 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:02:42.247029    4412 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:02:42.248021    4412 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-218000 san=[127.0.0.1 192.168.85.2 bridge-218000 localhost minikube]
	I1205 08:02:42.337031    4412 provision.go:177] copyRemoteCerts
	I1205 08:02:42.341035    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:02:42.343971    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:42.412273    4412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62540 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa Username:docker}
	I1205 08:02:42.543105    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:02:42.575318    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 08:02:42.604796    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:02:42.635163    4412 provision.go:87] duration metric: took 449.9579ms to configureAuth
	I1205 08:02:42.635163    4412 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:02:42.635163    4412 config.go:182] Loaded profile config "bridge-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:02:42.639680    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:42.695798    4412 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:42.696762    4412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62540 <nil> <nil>}
	I1205 08:02:42.696796    4412 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:02:42.887582    4412 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:02:42.887582    4412 ubuntu.go:71] root file system type: overlay
	I1205 08:02:42.887582    4412 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:02:42.891423    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:42.952941    4412 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:42.953336    4412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62540 <nil> <nil>}
	I1205 08:02:42.953336    4412 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:02:43.150628    4412 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:02:43.157932    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:43.212228    4412 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:43.212786    4412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62540 <nil> <nil>}
	I1205 08:02:43.212786    4412 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	W1205 08:02:50.043038    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:02:51.351684    7752 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-218000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (10.3678125s)
	I1205 08:02:51.351684    7752 kic.go:203] duration metric: took 10.3713908s to extract preloaded images to volume ...
	I1205 08:02:51.356217    7752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:02:51.592230    7752 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:02:51.573244941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:02:51.596230    7752 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 08:02:51.835243    7752 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-218000 --name kubenet-218000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-218000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-218000 --network kubenet-218000 --ip 192.168.94.2 --volume kubenet-218000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 08:02:52.512360    7752 cli_runner.go:164] Run: docker container inspect kubenet-218000 --format={{.State.Running}}
	I1205 08:02:52.580153    7752 cli_runner.go:164] Run: docker container inspect kubenet-218000 --format={{.State.Status}}
	I1205 08:02:52.641150    7752 cli_runner.go:164] Run: docker exec kubenet-218000 stat /var/lib/dpkg/alternatives/iptables
	I1205 08:02:52.748154    7752 oci.go:144] the created container "kubenet-218000" has a running status.
	I1205 08:02:52.748154    7752 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa...
	I1205 08:02:52.898499    7752 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 08:02:52.976913    7752 cli_runner.go:164] Run: docker container inspect kubenet-218000 --format={{.State.Status}}
	I1205 08:02:53.047237    7752 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 08:02:53.047237    7752 kic_runner.go:114] Args: [docker exec --privileged kubenet-218000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 08:02:51.483513    4412 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 08:02:43.137943573 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1205 08:02:51.483513    4412 machine.go:97] duration metric: took 10.2649798s to provisionDockerMachine
	I1205 08:02:51.483513    4412 client.go:176] duration metric: took 31.1179591s to LocalClient.Create
	I1205 08:02:51.483513    4412 start.go:167] duration metric: took 31.1179591s to libmachine.API.Create "bridge-218000"
	I1205 08:02:51.483513    4412 start.go:293] postStartSetup for "bridge-218000" (driver="docker")
	I1205 08:02:51.483513    4412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:02:51.488509    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:02:51.492503    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:51.547513    4412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62540 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa Username:docker}
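The inspect template used above, (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort, digs the host-side port of the container's SSH endpoint out of Docker's port map. The same lookup can be done by decoding the raw docker inspect JSON; a sketch (container name taken from the log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Just the slice of the inspect document we need: the port bindings map.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "bridge-218000").Output()
	if err != nil {
		panic(err)
	}
	var info []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// Equivalent of (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort
	bindings := info[0].NetworkSettings.Ports["22/tcp"]
	fmt.Println("ssh reachable at 127.0.0.1:" + bindings[0].HostPort)
}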
	I1205 08:02:51.685245    4412 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:02:51.694244    4412 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:02:51.694244    4412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:02:51.694244    4412 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:02:51.694244    4412 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:02:51.695240    4412 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:02:51.700235    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:02:51.714232    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:02:51.743232    4412 start.go:296] duration metric: took 259.7143ms for postStartSetup
	I1205 08:02:51.748249    4412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-218000
	I1205 08:02:51.805236    4412 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\config.json ...
	I1205 08:02:51.812250    4412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:02:51.816251    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:51.869235    4412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62540 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa Username:docker}
	I1205 08:02:52.004761    4412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:02:52.017535    4412 start.go:128] duration metric: took 31.6559656s to createHost
	I1205 08:02:52.017535    4412 start.go:83] releasing machines lock for "bridge-218000", held for 31.6559656s
	I1205 08:02:52.022450    4412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-218000
	I1205 08:02:52.079469    4412 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:02:52.083576    4412 ssh_runner.go:195] Run: cat /version.json
	I1205 08:02:52.084225    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:52.087444    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:52.138531    4412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62540 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa Username:docker}
	I1205 08:02:52.140178    4412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62540 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa Username:docker}
	W1205 08:02:52.260817    4412 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 08:02:52.264813    4412 ssh_runner.go:195] Run: systemctl --version
	I1205 08:02:52.279811    4412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:02:52.287815    4412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:02:52.291807    4412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:02:52.349774    4412 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
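The find/mv pipeline above sidelines competing CNI configs by giving them a .mk_disabled suffix instead of deleting them, so they can be restored later. Roughly the same operation in Go (same directory and suffix as the log; run as root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		// Glob is non-recursive, matching the find -maxdepth 1 in the log.
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			// Rename rather than delete, so the config can be re-enabled later.
			if err := os.Rename(m, m+".mk_disabled"); err == nil {
				fmt.Println("disabled", m)
			}
		}
	}
}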
	I1205 08:02:52.349830    4412 start.go:496] detecting cgroup driver to use...
	I1205 08:02:52.349885    4412 detect.go:187] detected "cgroupfs" cgroup driver on host os
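One common heuristic for this detection, shown here only as an illustrative sketch rather than minikube's exact detect.go logic: a unified cgroup v2 hierarchy exposes /sys/fs/cgroup/cgroup.controllers, while the cgroup v1 hosts reported as "cgroupfs" above do not:

package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup.controllers only exists at the root of a unified (v2) hierarchy.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 host (systemd cgroup driver is typically usable)")
	} else {
		fmt.Println("cgroup v1 host (fall back to the cgroupfs driver)")
	}
}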
	I1205 08:02:52.350031    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1205 08:02:52.368620    4412 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:02:52.368620    4412 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:02:52.457600    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:02:52.480795    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 08:02:52.495951    4412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:02:52.500575    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:02:52.528154    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:02:52.551156    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:02:52.572146    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:02:52.593146    4412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:02:52.612142    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:02:52.631143    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:02:52.650158    4412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
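Each of the sed calls above is an in-place, indentation-preserving edit of containerd's config.toml. The SystemdCgroup edit, for example, rewritten as a small Go program (same file path; run as root):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^...$ match per line; the captured leading spaces keep indentation,
	// mirroring sed's 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|'.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}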
	I1205 08:02:52.669162    4412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:02:52.687163    4412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:02:52.705147    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:02:52.857698    4412 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 08:02:53.049220    4412 start.go:496] detecting cgroup driver to use...
	I1205 08:02:53.049220    4412 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:02:53.055136    4412 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:02:53.089751    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:02:53.112763    4412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:02:53.186582    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:02:53.211577    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:02:53.233804    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:02:53.273481    4412 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:02:53.286052    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:02:53.301051    4412 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:02:53.331794    4412 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:02:53.536760    4412 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:02:53.761314    4412 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:02:53.761519    4412 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:02:53.792177    4412 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:02:53.820049    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:02:53.971144    4412 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:02:54.861867    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:02:54.885423    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:02:54.907727    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:02:54.930713    4412 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:02:55.096175    4412 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:02:55.260970    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:02:55.426249    4412 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:02:55.457081    4412 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:02:55.480776    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:02:55.649289    4412 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:02:55.767277    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:02:55.789347    4412 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:02:55.792340    4412 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:02:55.803340    4412 start.go:564] Will wait 60s for crictl version
	I1205 08:02:55.807341    4412 ssh_runner.go:195] Run: which crictl
	I1205 08:02:55.818342    4412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:02:55.866922    4412 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:02:55.870955    4412 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:02:55.918998    4412 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:02:53.168579    7752 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa...
	I1205 08:02:55.376493    7752 cli_runner.go:164] Run: docker container inspect kubenet-218000 --format={{.State.Status}}
	I1205 08:02:55.433787    7752 machine.go:94] provisionDockerMachine start ...
	I1205 08:02:55.436782    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:55.498537    7752 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:55.513471    7752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62580 <nil> <nil>}
	I1205 08:02:55.514005    7752 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:02:55.693362    7752 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-218000
	
	I1205 08:02:55.693430    7752 ubuntu.go:182] provisioning hostname "kubenet-218000"
	I1205 08:02:55.697760    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:55.755709    7752 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:55.755709    7752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62580 <nil> <nil>}
	I1205 08:02:55.755709    7752 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-218000 && echo "kubenet-218000" | sudo tee /etc/hostname
	I1205 08:02:55.945017    7752 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-218000
	
	I1205 08:02:55.949708    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:56.008253    7752 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:56.008253    7752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62580 <nil> <nil>}
	I1205 08:02:56.008253    7752 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-218000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-218000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-218000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:02:56.188983    7752 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:02:56.188983    7752 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:02:56.188983    7752 ubuntu.go:190] setting up certificates
	I1205 08:02:56.188983    7752 provision.go:84] configureAuth start
	I1205 08:02:56.194681    7752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-218000
	I1205 08:02:56.245618    7752 provision.go:143] copyHostCerts
	I1205 08:02:56.245618    7752 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:02:56.245618    7752 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:02:56.245618    7752 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:02:56.246625    7752 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:02:56.246625    7752 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:02:56.246625    7752 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:02:56.247628    7752 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:02:56.247628    7752 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:02:56.248627    7752 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:02:56.248627    7752 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-218000 san=[127.0.0.1 192.168.94.2 kubenet-218000 localhost minikube]
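The server cert generated here carries the SANs listed in the log (127.0.0.1, 192.168.94.2, kubenet-218000, localhost, minikube) so the Docker TLS endpoint validates under any of those names. A compact Go sketch of such a certificate using crypto/x509; for brevity this one is self-signed, whereas the real flow signs with the shared minikube CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-218000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// SAN list from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"kubenet-218000", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template doubles as parent. The real code passes the CA cert/key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}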
	I1205 08:02:56.445051    7752 provision.go:177] copyRemoteCerts
	I1205 08:02:56.449041    7752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:02:56.452041    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:56.509097    7752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62580 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa Username:docker}
	I1205 08:02:56.647889    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:02:56.676507    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:02:56.704901    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1205 08:02:56.736086    7752 provision.go:87] duration metric: took 546.5488ms to configureAuth
	I1205 08:02:56.736086    7752 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:02:56.737688    7752 config.go:182] Loaded profile config "kubenet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:02:56.742022    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:56.791839    7752 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:56.792847    7752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62580 <nil> <nil>}
	I1205 08:02:56.792847    7752 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:02:56.986622    7752 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:02:56.986622    7752 ubuntu.go:71] root file system type: overlay
	I1205 08:02:56.986622    7752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:02:56.990588    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:57.051065    7752 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:57.051639    7752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62580 <nil> <nil>}
	I1205 08:02:57.051860    7752 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:02:57.257615    7752 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:02:57.261258    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:57.321499    7752 main.go:143] libmachine: Using SSH client type: native
	I1205 08:02:57.321499    7752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62580 <nil> <nil>}
	I1205 08:02:57.321499    7752 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
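The one-liner above makes the unit update idempotent: diff succeeds (and the whole block is skipped) when docker.service already matches docker.service.new, so the daemon is only restarted on real changes. The same check-then-swap in Go (paths and systemctl flags from the log; run as root):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// updateUnit installs want at unit only when the content actually changed,
// then reloads systemd and restarts docker, mirroring the shell one-liner.
func updateUnit(unit string, want []byte) error {
	cur, _ := os.ReadFile(unit) // a missing unit just reads as empty
	if bytes.Equal(cur, want) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(unit, want, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit := "/lib/systemd/system/docker.service"
	want, err := os.ReadFile(unit + ".new")
	if err != nil {
		panic(err)
	}
	if err := updateUnit(unit, want); err != nil {
		panic(err)
	}
}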
	I1205 08:02:55.963211    4412 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.0.4 ...
	I1205 08:02:55.967106    4412 cli_runner.go:164] Run: docker exec -t bridge-218000 dig +short host.docker.internal
	I1205 08:02:56.088222    4412 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:02:56.091225    4412 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:02:56.099230    4412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
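This grep -v/echo/cp pipeline is an idempotent /etc/hosts update: any stale host.minikube.internal line is dropped before the freshly dug host IP is appended. An equivalent Go sketch (IP taken from the log; run as root):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.65.254\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line except an existing host.minikube.internal mapping,
	// matching grep -v $'\thost.minikube.internal$'.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}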
	I1205 08:02:56.117221    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:02:56.174745    4412 kubeadm.go:884] updating cluster {Name:bridge-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:02:56.174745    4412 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 08:02:56.181375    4412 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:02:56.218626    4412 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:02:56.218626    4412 docker.go:621] Images already preloaded, skipping extraction
	I1205 08:02:56.221620    4412 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:02:56.254666    4412 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:02:56.254666    4412 cache_images.go:86] Images are preloaded, skipping loading
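The preload check above lists local images with the same --format template and verifies every required tag is present before deciding to skip extraction. A sketch of that comparison (required list copied from the stdout block above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.2",
		"registry.k8s.io/kube-scheduler:v1.34.2",
		"registry.k8s.io/kube-proxy:v1.34.2",
		"registry.k8s.io/kube-controller-manager:v1.34.2",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	// Same listing command the log runs over SSH.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, extraction needed:", img)
			return
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}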
	I1205 08:02:56.254666    4412 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 docker true true} ...
	I1205 08:02:56.254666    4412 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-218000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1205 08:02:56.257622    4412 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:02:56.337759    4412 cni.go:84] Creating CNI manager for "bridge"
	I1205 08:02:56.338756    4412 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 08:02:56.338756    4412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-218000 NodeName:bridge-218000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:02:56.338756    4412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "bridge-218000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:02:56.342762    4412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 08:02:56.355758    4412 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:02:56.358750    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:02:56.371748    4412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 08:02:56.390854    4412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 08:02:56.411222    4412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1205 08:02:56.435043    4412 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:02:56.441049    4412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:02:56.460960    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:02:56.621313    4412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:02:56.644216    4412 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000 for IP: 192.168.85.2
	I1205 08:02:56.644261    4412 certs.go:195] generating shared ca certs ...
	I1205 08:02:56.644336    4412 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:56.644582    4412 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:02:56.645582    4412 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:02:56.645805    4412 certs.go:257] generating profile certs ...
	I1205 08:02:56.646098    4412 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\client.key
	I1205 08:02:56.646098    4412 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\client.crt with IP's: []
	I1205 08:02:56.821439    4412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\client.crt ...
	I1205 08:02:56.821959    4412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\client.crt: {Name:mka72ecdb381ee4ddfd46817e37aac9bfe19165b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:56.822702    4412 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\client.key ...
	I1205 08:02:56.822702    4412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\client.key: {Name:mk5e09c1a499a428df8f4988f4e0d4fe71bbdfd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:56.823811    4412 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.key.071f8149
	I1205 08:02:56.823851    4412 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.crt.071f8149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 08:02:56.930676    4412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.crt.071f8149 ...
	I1205 08:02:56.930676    4412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.crt.071f8149: {Name:mk9e67a61c7251efdfac3ab7cdcffd267557c537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:56.931722    4412 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.key.071f8149 ...
	I1205 08:02:56.931722    4412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.key.071f8149: {Name:mka93ca10bce6966a3c8a794e84ffa2821dc48d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:56.932725    4412 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.crt.071f8149 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.crt
	I1205 08:02:56.946554    4412 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.key.071f8149 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.key
	I1205 08:02:56.947427    4412 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.key
	I1205 08:02:56.947691    4412 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.crt with IP's: []
	I1205 08:02:57.083452    4412 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.crt ...
	I1205 08:02:57.083452    4412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.crt: {Name:mk3ffec7071c0788f741d5dc4199451bce4dcfc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:57.084382    4412 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.key ...
	I1205 08:02:57.084972    4412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.key: {Name:mk200e4a4862f77346bedd2f91fce951965cf678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:02:57.100089    4412 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:02:57.100089    4412 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:02:57.100089    4412 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:02:57.100089    4412 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:02:57.101113    4412 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:02:57.101113    4412 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:02:57.101113    4412 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:02:57.103086    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:02:57.136729    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:02:57.167979    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:02:57.201532    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:02:57.232574    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 08:02:57.261857    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 08:02:57.295922    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:02:57.332949    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-218000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:02:57.362280    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:02:57.393136    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:02:57.422244    4412 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:02:57.450069    4412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:02:57.476460    4412 ssh_runner.go:195] Run: openssl version
	I1205 08:02:57.491727    4412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:02:57.518409    4412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:02:57.539049    4412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:02:57.549879    4412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:02:57.553504    4412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:02:57.602773    4412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:02:57.619908    4412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 08:02:57.641182    4412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:02:57.662406    4412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:02:57.682751    4412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:02:57.692476    4412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:02:57.697621    4412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:02:57.749213    4412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:02:57.770826    4412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8036.pem /etc/ssl/certs/51391683.0
	I1205 08:02:57.792972    4412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:02:57.814862    4412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:02:57.837080    4412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:02:57.848276    4412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:02:57.851760    4412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:02:57.906264    4412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:02:57.926149    4412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/80362.pem /etc/ssl/certs/3ec20f2e.0
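The numeric names being linked here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: openssl x509 -hash prints the hash under which TLS libraries look a CA up in /etc/ssl/certs, and the symlink makes each cert discoverable under that name. A sketch of the hash-and-link step in Go (run as root):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command the log runs: print only the subject hash of the cert.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}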
	I1205 08:02:57.947050    4412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:02:57.955819    4412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 08:02:57.955819    4412 kubeadm.go:401] StartCluster: {Name:bridge-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:02:57.959519    4412 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:02:57.996861    4412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:02:58.016916    4412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 08:02:58.031658    4412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 08:02:58.035649    4412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 08:02:58.048664    4412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 08:02:58.048664    4412 kubeadm.go:158] found existing configuration files:
	
	I1205 08:02:58.052647    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 08:02:58.065650    4412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 08:02:58.069960    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 08:02:58.088414    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 08:02:58.101282    4412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 08:02:58.106889    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 08:02:58.128645    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 08:02:58.143863    4412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 08:02:58.148342    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 08:02:58.169708    4412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 08:02:58.186973    4412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 08:02:58.190799    4412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 08:02:58.208699    4412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 08:02:58.326600    4412 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 08:02:58.331657    4412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 08:02:58.431493    4412 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1205 08:03:00.077696    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:02:59.031528    7752 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-11-24 21:58:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-05 08:02:57.238359342 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
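The inline comments in the diff above describe systemd's override rule: a Type=notify unit may carry only one ExecStart=, so an empty ExecStart= directive must first clear the inherited command before the replacement is declared. A minimal sketch of the same technique expressed as a drop-in unit, with an illustrative file name (this run instead rewrites /lib/systemd/system/docker.service in place):

  # Sketch: clear the inherited ExecStart, then declare the replacement.
  sudo mkdir -p /etc/systemd/system/docker.service.d
  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\n' \
    | sudo tee /etc/systemd/system/docker.service.d/10-machine.conf
  sudo systemctl daemon-reload && sudo systemctl restart docker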
	
	I1205 08:02:59.031528    7752 machine.go:97] duration metric: took 3.5976837s to provisionDockerMachine
	I1205 08:02:59.031528    7752 client.go:176] duration metric: took 20.0463608s to LocalClient.Create
	I1205 08:02:59.031528    7752 start.go:167] duration metric: took 20.0463608s to libmachine.API.Create "kubenet-218000"
	I1205 08:02:59.031528    7752 start.go:293] postStartSetup for "kubenet-218000" (driver="docker")
	I1205 08:02:59.031528    7752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:02:59.035529    7752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:02:59.038521    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:59.094413    7752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62580 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa Username:docker}
	I1205 08:02:59.233449    7752 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:02:59.246801    7752 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:02:59.246801    7752 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:02:59.246801    7752 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:02:59.247291    7752 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:02:59.247848    7752 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:02:59.253903    7752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:02:59.269804    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:02:59.298677    7752 start.go:296] duration metric: took 267.1455ms for postStartSetup
	I1205 08:02:59.304700    7752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-218000
	I1205 08:02:59.356667    7752 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\config.json ...
	I1205 08:02:59.361668    7752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:02:59.365670    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:59.425410    7752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62580 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa Username:docker}
	I1205 08:02:59.560856    7752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:02:59.574526    7752 start.go:128] duration metric: took 20.5933468s to createHost
	I1205 08:02:59.574526    7752 start.go:83] releasing machines lock for "kubenet-218000", held for 20.5933468s
	I1205 08:02:59.578604    7752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-218000
	I1205 08:02:59.634184    7752 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:02:59.637608    7752 ssh_runner.go:195] Run: cat /version.json
	I1205 08:02:59.638617    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:59.641194    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:02:59.692256    7752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62580 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa Username:docker}
	I1205 08:02:59.693419    7752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62580 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa Username:docker}
	W1205 08:02:59.814868    7752 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 08:02:59.828175    7752 ssh_runner.go:195] Run: systemctl --version
	I1205 08:02:59.845790    7752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:02:59.854913    7752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:02:59.859313    7752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:02:59.915622    7752 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 08:02:59.915687    7752 start.go:496] detecting cgroup driver to use...
	I1205 08:02:59.915744    7752 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:02:59.915911    7752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1205 08:02:59.922323    7752 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:02:59.922323    7752 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
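The status 127 a few lines up is the shell reporting "command not found": the probe invoked the Windows binary name curl.exe inside the Linux guest, where the binary is plain curl, and the failed probe then surfaces as the registry warning above. A sketch of re-running the probe by hand with the Linux name (profile name taken from this run):

  minikube -p kubenet-218000 ssh -- curl -sS -m 2 https://registry.k8s.io/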
	I1205 08:02:59.945866    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:02:59.964625    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 08:02:59.982317    7752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:02:59.986824    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:00.009747    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:00.030250    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:00.054156    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:00.076246    7752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:00.100524    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:00.122909    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:00.143900    7752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:00.165708    7752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:00.191328    7752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:03:00.208944    7752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:00.348450    7752 ssh_runner.go:195] Run: sudo systemctl restart containerd
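The sed edits above pin sandbox_image to pause:3.10.1, set SystemdCgroup = false to match the cgroupfs driver detected on the host, map the legacy runc runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A sketch for spot-checking the resulting config after the restart (profile name from this run):

  minikube -p kubenet-218000 ssh -- grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml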
	I1205 08:03:00.505430    7752 start.go:496] detecting cgroup driver to use...
	I1205 08:03:00.505474    7752 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:00.509999    7752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:00.540670    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:00.565287    7752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:00.616086    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:00.639910    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:00.658807    7752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:00.691666    7752 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:00.702760    7752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:00.719029    7752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1205 08:03:00.742203    7752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:00.906388    7752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:01.045654    7752 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:01.045654    7752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:03:01.073688    7752 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:01.096513    7752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:01.246412    7752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:03:02.143428    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:03:02.167612    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:03:02.194386    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:03:02.220101    7752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:03:02.377998    7752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:03:02.529994    7752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:02.668544    7752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:03:02.695841    7752 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:03:02.719312    7752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:02.876093    7752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:03:03.002330    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:03:03.024202    7752 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:03:03.029626    7752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:03:03.039612    7752 start.go:564] Will wait 60s for crictl version
	I1205 08:03:03.043385    7752 ssh_runner.go:195] Run: which crictl
	I1205 08:03:03.056206    7752 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:03:03.107885    7752 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:03:03.112008    7752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:03:03.162999    7752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:03:03.203067    7752 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.0.4 ...
	I1205 08:03:03.207068    7752 cli_runner.go:164] Run: docker exec -t kubenet-218000 dig +short host.docker.internal
	I1205 08:03:03.332072    7752 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:03:03.336948    7752 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:03:03.347358    7752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:03:03.367144    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:03:03.424603    7752 kubeadm.go:884] updating cluster {Name:kubenet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:03:03.424750    7752 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1205 08:03:03.428186    7752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:03:03.464296    7752 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:03:03.464296    7752 docker.go:621] Images already preloaded, skipping extraction
	I1205 08:03:03.468137    7752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:03:03.501764    7752 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:03:03.501764    7752 cache_images.go:86] Images are preloaded, skipping loading
	I1205 08:03:03.501764    7752 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 docker true true} ...
	I1205 08:03:03.501764    7752 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-218000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
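The kubelet unit above uses the same override pattern as the docker.service diff earlier: the bare ExecStart= clears any inherited command before the versioned binary is declared. Once the 10-kubeadm.conf drop-in is copied over (a few lines below), the merged unit can be inspected; a sketch:

  minikube -p kubenet-218000 ssh -- sudo systemctl cat kubelet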
	I1205 08:03:03.505977    7752 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:03:03.585106    7752 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1205 08:03:03.585106    7752 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 08:03:03.585106    7752 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-218000 NodeName:kubenet-218000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:03:03.585793    7752 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-218000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
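The generated file above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration as one multi-document YAML. As a sketch, kubeadm v1.26+ can sanity-check such a file before init is attempted (binary and upload path taken from this run):

  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml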
	
	I1205 08:03:03.591648    7752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 08:03:03.604902    7752 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:03:03.608970    7752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:03:03.624375    7752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1205 08:03:03.646021    7752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 08:03:03.667750    7752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1205 08:03:03.696970    7752 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:03:03.704984    7752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:03:03.726975    7752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:03.877004    7752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:03:03.902501    7752 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000 for IP: 192.168.94.2
	I1205 08:03:03.902558    7752 certs.go:195] generating shared ca certs ...
	I1205 08:03:03.902594    7752 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:03.902921    7752 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:03:03.902921    7752 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:03:03.902921    7752 certs.go:257] generating profile certs ...
	I1205 08:03:03.903713    7752 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\client.key
	I1205 08:03:03.903713    7752 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\client.crt with IP's: []
	I1205 08:03:03.940608    7752 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\client.crt ...
	I1205 08:03:03.940608    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\client.crt: {Name:mk4e187b737d5987c6d01f898cf9b3c846efebad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:03.941612    7752 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\client.key ...
	I1205 08:03:03.941612    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\client.key: {Name:mk01757d33de6d9ff283431b4ad4ba35ce0a3cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:03.943156    7752 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.key.bb6a00e2
	I1205 08:03:03.943156    7752 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.crt.bb6a00e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1205 08:03:04.116722    7752 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.crt.bb6a00e2 ...
	I1205 08:03:04.116722    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.crt.bb6a00e2: {Name:mk8028558f25c71664a630f4af6be932f721c6c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:04.117723    7752 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.key.bb6a00e2 ...
	I1205 08:03:04.117723    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.key.bb6a00e2: {Name:mk7bc4d9fd0f7becd556de63ce21ddfb7da86c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:04.118952    7752 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.crt.bb6a00e2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.crt
	I1205 08:03:04.132397    7752 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.key.bb6a00e2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.key
	I1205 08:03:04.133493    7752 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.key
	I1205 08:03:04.134265    7752 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.crt with IP's: []
	I1205 08:03:04.174110    7752 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.crt ...
	I1205 08:03:04.174110    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.crt: {Name:mk269e6e47f0a58bcca6888635616740718046a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:04.175165    7752 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.key ...
	I1205 08:03:04.175165    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.key: {Name:mkcdd4439d64a9692a4ade15a13ad94383a37725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:04.188300    7752 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:03:04.189810    7752 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:03:04.189859    7752 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:03:04.189859    7752 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:03:04.189859    7752 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:03:04.190478    7752 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:03:04.191028    7752 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:03:04.193627    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:03:04.228228    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:03:04.258600    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:03:04.289922    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:03:04.317311    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 08:03:04.345302    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:03:04.376003    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:03:04.404545    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-218000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 08:03:04.434648    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:03:04.464595    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:03:04.496826    7752 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:03:04.528966    7752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:03:04.555623    7752 ssh_runner.go:195] Run: openssl version
	I1205 08:03:04.571473    7752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:03:04.588471    7752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:03:04.608358    7752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:03:04.615992    7752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:03:04.620041    7752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:03:04.672894    7752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:03:04.692100    7752 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/80362.pem /etc/ssl/certs/3ec20f2e.0
	I1205 08:03:04.709132    7752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:03:04.726422    7752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:03:04.744601    7752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:03:04.751613    7752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:03:04.755606    7752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:03:04.807184    7752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:03:04.827688    7752 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 08:03:04.843688    7752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:03:04.859693    7752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:03:04.876697    7752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:03:04.884689    7752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:03:04.887703    7752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:03:04.936606    7752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:03:04.957308    7752 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8036.pem /etc/ssl/certs/51391683.0
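The test -L / ln -fs pairs above implement OpenSSL's hashed-directory layout: each CA under /etc/ssl/certs must also be reachable through a symlink named <subject-hash>.0 so verification can locate it by hash; 3ec20f2e, b5213941, and 51391683 are those hashes. A sketch of deriving one of them:

  # Compute the subject hash and create the matching symlink for the minikube CA.
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"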
	I1205 08:03:04.975447    7752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:03:04.984386    7752 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 08:03:04.984386    7752 kubeadm.go:401] StartCluster: {Name:kubenet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:04.988359    7752 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:03:05.023636    7752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:03:05.045658    7752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 08:03:05.061572    7752 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 08:03:05.067745    7752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 08:03:05.080690    7752 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 08:03:05.080690    7752 kubeadm.go:158] found existing configuration files:
	
	I1205 08:03:05.084688    7752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 08:03:05.097689    7752 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 08:03:05.101687    7752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 08:03:05.117669    7752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 08:03:05.133076    7752 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 08:03:05.139149    7752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 08:03:05.163720    7752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 08:03:05.178327    7752 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 08:03:05.182318    7752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 08:03:05.203955    7752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 08:03:05.217125    7752 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 08:03:05.221419    7752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 08:03:05.245216    7752 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 08:03:05.368210    7752 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1205 08:03:05.371219    7752 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 08:03:05.468456    7752 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1205 08:03:10.109694    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:03:13.811034    4412 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 08:03:13.811034    4412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 08:03:13.811034    4412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 08:03:13.811719    4412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 08:03:13.811933    4412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 08:03:13.812068    4412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 08:03:13.817752    4412 out.go:252]   - Generating certificates and keys ...
	I1205 08:03:13.817912    4412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 08:03:13.818164    4412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 08:03:13.818344    4412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 08:03:13.818463    4412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 08:03:13.818463    4412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 08:03:13.818463    4412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 08:03:13.818463    4412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 08:03:13.818995    4412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-218000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 08:03:13.819119    4412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 08:03:13.819283    4412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-218000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 08:03:13.819283    4412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 08:03:13.819283    4412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 08:03:13.819283    4412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 08:03:13.819283    4412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 08:03:13.819916    4412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 08:03:13.819916    4412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 08:03:13.819916    4412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 08:03:13.819916    4412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 08:03:13.819916    4412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 08:03:13.820448    4412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 08:03:13.820598    4412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 08:03:13.829791    4412 out.go:252]   - Booting up control plane ...
	I1205 08:03:13.829791    4412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 08:03:13.829791    4412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 08:03:13.829791    4412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 08:03:13.829791    4412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 08:03:13.830790    4412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 08:03:13.830790    4412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 08:03:13.830790    4412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 08:03:13.830790    4412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 08:03:13.830790    4412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 08:03:13.830790    4412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 08:03:13.831793    4412 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50160401s
	I1205 08:03:13.831793    4412 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 08:03:13.831793    4412 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1205 08:03:13.831793    4412 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 08:03:13.832672    4412 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 08:03:13.832971    4412 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.333857039s
	I1205 08:03:13.833138    4412 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.252605877s
	I1205 08:03:13.833138    4412 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002834311s
	I1205 08:03:13.833768    4412 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 08:03:13.833768    4412 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 08:03:13.834459    4412 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 08:03:13.835944    4412 kubeadm.go:319] [mark-control-plane] Marking the node bridge-218000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 08:03:13.836108    4412 kubeadm.go:319] [bootstrap-token] Using token: ysoq7y.7nl1flch9he5bprm
	I1205 08:03:13.840009    4412 out.go:252]   - Configuring RBAC rules ...
	I1205 08:03:13.840009    4412 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 08:03:13.840009    4412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 08:03:13.841730    4412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 08:03:13.841730    4412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 08:03:13.841730    4412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 08:03:13.842738    4412 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 08:03:13.842937    4412 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 08:03:13.843017    4412 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 08:03:13.843224    4412 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 08:03:13.843224    4412 kubeadm.go:319] 
	I1205 08:03:13.843224    4412 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 08:03:13.843224    4412 kubeadm.go:319] 
	I1205 08:03:13.843224    4412 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 08:03:13.843224    4412 kubeadm.go:319] 
	I1205 08:03:13.843224    4412 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 08:03:13.843961    4412 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 08:03:13.844014    4412 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 08:03:13.844014    4412 kubeadm.go:319] 
	I1205 08:03:13.844014    4412 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 08:03:13.844014    4412 kubeadm.go:319] 
	I1205 08:03:13.844014    4412 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 08:03:13.844014    4412 kubeadm.go:319] 
	I1205 08:03:13.844014    4412 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 08:03:13.844014    4412 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 08:03:13.844014    4412 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 08:03:13.844014    4412 kubeadm.go:319] 
	I1205 08:03:13.844014    4412 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 08:03:13.844014    4412 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 08:03:13.844014    4412 kubeadm.go:319] 
	I1205 08:03:13.844014    4412 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ysoq7y.7nl1flch9he5bprm \
	I1205 08:03:13.844014    4412 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e \
	I1205 08:03:13.844014    4412 kubeadm.go:319] 	--control-plane 
	I1205 08:03:13.844014    4412 kubeadm.go:319] 
	I1205 08:03:13.845601    4412 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 08:03:13.845601    4412 kubeadm.go:319] 
	I1205 08:03:13.845601    4412 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ysoq7y.7nl1flch9he5bprm \
	I1205 08:03:13.845601    4412 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e 
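The bootstrap token embedded in the join commands above expires after the 24h ttl set in the InitConfiguration earlier; once it has lapsed, an equivalent command can be regenerated on the control plane. A sketch:

  sudo kubeadm token create --print-join-command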
	I1205 08:03:13.845601    4412 cni.go:84] Creating CNI manager for "bridge"
	I1205 08:03:13.848454    4412 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 08:03:13.854775    4412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 08:03:13.871202    4412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 08:03:13.891601    4412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 08:03:13.897944    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-218000 minikube.k8s.io/updated_at=2025_12_05T08_03_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=bridge-218000 minikube.k8s.io/primary=true
	I1205 08:03:13.899298    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:13.909610    4412 ops.go:34] apiserver oom_adj: -16
	I1205 08:03:14.084328    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:14.586056    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:15.085198    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:15.585008    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:16.084651    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:16.585773    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:17.085927    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:17.585657    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:18.085592    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:18.585290    4412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:18.701439    4412 kubeadm.go:1114] duration metric: took 4.8096852s to wait for elevateKubeSystemPrivileges
	I1205 08:03:18.701517    4412 kubeadm.go:403] duration metric: took 20.745304s to StartCluster
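The repeated "kubectl get sa default" runs above poll for the default ServiceAccount, which the controller manager creates asynchronously after init; elevateKubeSystemPrivileges returns once it exists. A shell sketch of the same wait, using the paths from this run:

  until sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
    sleep 0.5
  done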
	I1205 08:03:18.701517    4412 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:18.701769    4412 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:18.703361    4412 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:18.703906    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 08:03:18.703906    4412 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:03:18.703906    4412 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:03:18.704570    4412 addons.go:70] Setting storage-provisioner=true in profile "bridge-218000"
	I1205 08:03:18.704570    4412 addons.go:70] Setting default-storageclass=true in profile "bridge-218000"
	I1205 08:03:18.704686    4412 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-218000"
	I1205 08:03:18.704744    4412 config.go:182] Loaded profile config "bridge-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:03:18.704570    4412 addons.go:239] Setting addon storage-provisioner=true in "bridge-218000"
	I1205 08:03:18.704904    4412 host.go:66] Checking if "bridge-218000" exists ...
	I1205 08:03:18.706915    4412 out.go:179] * Verifying Kubernetes components...
	I1205 08:03:18.715075    4412 cli_runner.go:164] Run: docker container inspect bridge-218000 --format={{.State.Status}}
	I1205 08:03:18.716074    4412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:18.716074    4412 cli_runner.go:164] Run: docker container inspect bridge-218000 --format={{.State.Status}}
	I1205 08:03:18.784806    4412 addons.go:239] Setting addon default-storageclass=true in "bridge-218000"
	I1205 08:03:18.784806    4412 host.go:66] Checking if "bridge-218000" exists ...
	I1205 08:03:18.791819    4412 cli_runner.go:164] Run: docker container inspect bridge-218000 --format={{.State.Status}}
	I1205 08:03:18.814815    4412 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:03:18.914813    7752 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 08:03:18.914813    7752 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 08:03:18.914813    7752 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 08:03:18.914813    7752 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 08:03:18.914813    7752 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 08:03:18.914813    7752 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 08:03:18.922807    7752 out.go:252]   - Generating certificates and keys ...
	I1205 08:03:18.922807    7752 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 08:03:18.922807    7752 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 08:03:18.923816    7752 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 08:03:18.923816    7752 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 08:03:18.923816    7752 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 08:03:18.923816    7752 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 08:03:18.923816    7752 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 08:03:18.924828    7752 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-218000 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1205 08:03:18.924828    7752 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 08:03:18.925815    7752 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-218000 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1205 08:03:18.925815    7752 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 08:03:18.926824    7752 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 08:03:18.926824    7752 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 08:03:18.926824    7752 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 08:03:18.927823    7752 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 08:03:18.927823    7752 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 08:03:18.927823    7752 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 08:03:18.927823    7752 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 08:03:18.927823    7752 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 08:03:18.928822    7752 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 08:03:18.928822    7752 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 08:03:18.931829    7752 out.go:252]   - Booting up control plane ...
	I1205 08:03:18.931829    7752 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 08:03:18.932824    7752 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 08:03:18.932824    7752 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 08:03:18.932824    7752 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 08:03:18.932824    7752 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 08:03:18.933827    7752 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 08:03:18.933827    7752 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 08:03:18.933827    7752 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 08:03:18.933827    7752 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 08:03:18.933827    7752 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 08:03:18.934833    7752 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001973259s
	I1205 08:03:18.934833    7752 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 08:03:18.934833    7752 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1205 08:03:18.934833    7752 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 08:03:18.934833    7752 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 08:03:18.935825    7752 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.444829642s
	I1205 08:03:18.935825    7752 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.06180683s
	I1205 08:03:18.935825    7752 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502179671s
	I1205 08:03:18.935825    7752 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 08:03:18.936804    7752 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 08:03:18.936804    7752 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 08:03:18.936804    7752 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-218000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 08:03:18.936804    7752 kubeadm.go:319] [bootstrap-token] Using token: 1cxax7.ns6btejt7ml94kfn
	I1205 08:03:18.817808    4412 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:03:18.817808    4412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:03:18.821813    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:03:18.852818    4412 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:03:18.852818    4412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:03:18.857806    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:03:18.882811    4412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62540 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa Username:docker}
	I1205 08:03:18.905801    4412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62540 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-218000\id_rsa Username:docker}
	I1205 08:03:19.034328    4412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 08:03:19.239545    4412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:03:19.258352    4412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:03:19.352318    4412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:03:19.928597    4412 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
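Annotation: the sed pipeline in the Run line at 08:03:19.034 rewrites the CoreDNS ConfigMap in place, inserting a log directive ahead of errors and a hosts block ahead of the forward directive, so that host.minikube.internal resolves to the Docker host (192.168.65.254) from inside the cluster. Reconstructed from those sed arguments, the edited Corefile fragment reads:

    log
    errors
    # ... remaining default plugins unchanged ...
    hosts {
       192.168.65.254 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf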
	I1205 08:03:19.936271    4412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-218000
	I1205 08:03:19.994735    4412 node_ready.go:35] waiting up to 15m0s for node "bridge-218000" to be "Ready" ...
	I1205 08:03:20.028414    4412 node_ready.go:49] node "bridge-218000" is "Ready"
	I1205 08:03:20.028414    4412 node_ready.go:38] duration metric: took 33.6787ms for node "bridge-218000" to be "Ready" ...
	I1205 08:03:20.028414    4412 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:03:20.035384    4412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:03:20.446977    4412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-218000" context rescaled to 1 replicas
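Annotation: the rescale at kapi.go:214 above trims the default two-replica coredns Deployment down to one, which is why one of the two coredns pods is later deleted during the pod_ready waits further down. An equivalent client-go sketch via the scale subresource (illustrative only; the kubeconfig path is the node-local one from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // Read the current scale of the coredns Deployment...
        scale, err := client.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // ...and write it back with a single replica.
        scale.Spec.Replicas = 1
        _, err = client.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }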
	I1205 08:03:20.850239    4412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.4978593s)
	I1205 08:03:20.850239    4412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.5918088s)
	I1205 08:03:20.850368    4412 api_server.go:72] duration metric: took 2.1459061s to wait for apiserver process to appear ...
	I1205 08:03:20.850368    4412 api_server.go:88] waiting for apiserver healthz status ...
	I1205 08:03:20.850368    4412 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62539/healthz ...
	I1205 08:03:20.935101    4412 api_server.go:279] https://127.0.0.1:62539/healthz returned 200:
	ok
	I1205 08:03:20.938090    4412 api_server.go:141] control plane version: v1.34.2
	I1205 08:03:20.938090    4412 api_server.go:131] duration metric: took 87.721ms to wait for apiserver health ...
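Annotation: the healthz wait above issues HTTPS GETs against 127.0.0.1:62539, the host port that docker container inspect mapped to the apiserver's 8443/tcp, until it gets a 200 with body "ok". A stripped-down sketch of that probe follows; minikube builds its HTTP client from the cluster certificates, while this sketch skips verification purely to stay short.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Host-mapped apiserver port taken from the log; illustrative.
        url := "https://127.0.0.1:62539/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            // minikube dials with the cluster CA; skipped here for brevity.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect 200: ok
    }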
	I1205 08:03:20.938090    4412 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 08:03:20.946844    4412 system_pods.go:59] 8 kube-system pods found
	I1205 08:03:20.946844    4412 system_pods.go:61] "coredns-66bc5c9577-jnqxb" [8a2acc63-c9a3-4438-a037-7d3f180b1ca1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:20.946844    4412 system_pods.go:61] "coredns-66bc5c9577-zrgxp" [f5b4e994-7931-429c-816f-146480fba04d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:20.946844    4412 system_pods.go:61] "etcd-bridge-218000" [8f3a7f79-834e-43dd-b20d-ab638640c777] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 08:03:20.946844    4412 system_pods.go:61] "kube-apiserver-bridge-218000" [ce57bde4-8c12-43e5-b08b-e7fec39d606c] Running
	I1205 08:03:20.946844    4412 system_pods.go:61] "kube-controller-manager-bridge-218000" [8c6b84bc-04f7-47c7-8a40-31913808a09a] Running
	I1205 08:03:20.946844    4412 system_pods.go:61] "kube-proxy-8r4gs" [622d5d99-1ddc-40af-9457-a2d8381c3055] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:20.946844    4412 system_pods.go:61] "kube-scheduler-bridge-218000" [b1342bf7-8218-4837-916a-cf8d004103b0] Running
	I1205 08:03:20.946844    4412 system_pods.go:61] "storage-provisioner" [8b996d75-f7c6-407d-ad41-c38a7c9d079d] Pending
	I1205 08:03:20.946844    4412 system_pods.go:74] duration metric: took 8.7534ms to wait for pod list to return data ...
	I1205 08:03:20.946844    4412 default_sa.go:34] waiting for default service account to be created ...
	I1205 08:03:20.956135    4412 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1205 08:03:20.144676    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:03:18.940813    7752 out.go:252]   - Configuring RBAC rules ...
	I1205 08:03:18.940813    7752 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 08:03:18.940813    7752 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 08:03:18.941806    7752 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 08:03:18.941806    7752 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 08:03:18.941806    7752 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 08:03:18.942823    7752 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 08:03:18.942823    7752 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 08:03:18.942823    7752 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 08:03:18.942823    7752 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 08:03:18.942823    7752 kubeadm.go:319] 
	I1205 08:03:18.942823    7752 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 08:03:18.942823    7752 kubeadm.go:319] 
	I1205 08:03:18.942823    7752 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 08:03:18.942823    7752 kubeadm.go:319] 
	I1205 08:03:18.942823    7752 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 08:03:18.943808    7752 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 08:03:18.943808    7752 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 08:03:18.943808    7752 kubeadm.go:319] 
	I1205 08:03:18.943808    7752 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 08:03:18.943808    7752 kubeadm.go:319] 
	I1205 08:03:18.943808    7752 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 08:03:18.943808    7752 kubeadm.go:319] 
	I1205 08:03:18.943808    7752 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 08:03:18.943808    7752 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 08:03:18.944806    7752 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 08:03:18.944806    7752 kubeadm.go:319] 
	I1205 08:03:18.944806    7752 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 08:03:18.944806    7752 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 08:03:18.944806    7752 kubeadm.go:319] 
	I1205 08:03:18.944806    7752 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1cxax7.ns6btejt7ml94kfn \
	I1205 08:03:18.945817    7752 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e \
	I1205 08:03:18.945817    7752 kubeadm.go:319] 	--control-plane 
	I1205 08:03:18.945817    7752 kubeadm.go:319] 
	I1205 08:03:18.945817    7752 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 08:03:18.945817    7752 kubeadm.go:319] 
	I1205 08:03:18.945817    7752 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1cxax7.ns6btejt7ml94kfn \
	I1205 08:03:18.945817    7752 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:357aea705a850b8655a3b0758990f5403e6ec7ce3ec2d0f4c60e6f0ad5f05e6e 
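Annotation: the --discovery-token-ca-cert-hash printed in both join commands is a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA; joining nodes use it to pin the control plane's identity during token-based discovery. A sketch that recomputes the hash from the CA certificate on the node:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // CA path as laid out on the control-plane node; illustrative.
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }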
	I1205 08:03:18.945817    7752 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1205 08:03:18.945817    7752 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 08:03:18.952816    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:18.953821    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-218000 minikube.k8s.io/updated_at=2025_12_05T08_03_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=kubenet-218000 minikube.k8s.io/primary=true
	I1205 08:03:18.964812    7752 ops.go:34] apiserver oom_adj: -16
	I1205 08:03:19.153727    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:19.654234    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:20.153978    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:20.654175    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:21.155492    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:21.655499    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:22.154374    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:22.653568    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:23.153808    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:23.655317    7752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:03:23.759248    7752 kubeadm.go:1114] duration metric: took 4.8132669s to wait for elevateKubeSystemPrivileges
	I1205 08:03:23.759332    7752 kubeadm.go:403] duration metric: took 18.7746482s to StartCluster
	I1205 08:03:23.759385    7752 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:23.759615    7752 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:23.762319    7752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:03:23.763782    7752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 08:03:23.764458    7752 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:03:23.764458    7752 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:03:23.764685    7752 addons.go:70] Setting storage-provisioner=true in profile "kubenet-218000"
	I1205 08:03:23.764757    7752 addons.go:239] Setting addon storage-provisioner=true in "kubenet-218000"
	I1205 08:03:23.764757    7752 addons.go:70] Setting default-storageclass=true in profile "kubenet-218000"
	I1205 08:03:23.764757    7752 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-218000"
	I1205 08:03:23.764879    7752 host.go:66] Checking if "kubenet-218000" exists ...
	I1205 08:03:23.764989    7752 config.go:182] Loaded profile config "kubenet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 08:03:23.767750    7752 out.go:179] * Verifying Kubernetes components...
	I1205 08:03:23.774818    7752 cli_runner.go:164] Run: docker container inspect kubenet-218000 --format={{.State.Status}}
	I1205 08:03:23.775702    7752 cli_runner.go:164] Run: docker container inspect kubenet-218000 --format={{.State.Status}}
	I1205 08:03:23.779145    7752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:23.844525    7752 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:03:20.961008    4412 addons.go:530] duration metric: took 2.2570663s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 08:03:21.024856    4412 default_sa.go:45] found service account: "default"
	I1205 08:03:21.024856    4412 default_sa.go:55] duration metric: took 77.4798ms for default service account to be created ...
	I1205 08:03:21.024856    4412 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 08:03:21.035991    4412 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:21.036077    4412 system_pods.go:89] "coredns-66bc5c9577-jnqxb" [8a2acc63-c9a3-4438-a037-7d3f180b1ca1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:21.036077    4412 system_pods.go:89] "coredns-66bc5c9577-zrgxp" [f5b4e994-7931-429c-816f-146480fba04d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:21.036155    4412 system_pods.go:89] "etcd-bridge-218000" [8f3a7f79-834e-43dd-b20d-ab638640c777] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 08:03:21.036155    4412 system_pods.go:89] "kube-apiserver-bridge-218000" [ce57bde4-8c12-43e5-b08b-e7fec39d606c] Running
	I1205 08:03:21.036155    4412 system_pods.go:89] "kube-controller-manager-bridge-218000" [8c6b84bc-04f7-47c7-8a40-31913808a09a] Running
	I1205 08:03:21.036155    4412 system_pods.go:89] "kube-proxy-8r4gs" [622d5d99-1ddc-40af-9457-a2d8381c3055] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:21.036155    4412 system_pods.go:89] "kube-scheduler-bridge-218000" [b1342bf7-8218-4837-916a-cf8d004103b0] Running
	I1205 08:03:21.036155    4412 system_pods.go:89] "storage-provisioner" [8b996d75-f7c6-407d-ad41-c38a7c9d079d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:21.036250    4412 retry.go:31] will retry after 309.274012ms: missing components: kube-dns, kube-proxy
	I1205 08:03:21.425318    4412 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:21.425318    4412 system_pods.go:89] "coredns-66bc5c9577-jnqxb" [8a2acc63-c9a3-4438-a037-7d3f180b1ca1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:21.425318    4412 system_pods.go:89] "coredns-66bc5c9577-zrgxp" [f5b4e994-7931-429c-816f-146480fba04d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:21.425318    4412 system_pods.go:89] "etcd-bridge-218000" [8f3a7f79-834e-43dd-b20d-ab638640c777] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 08:03:21.425318    4412 system_pods.go:89] "kube-apiserver-bridge-218000" [ce57bde4-8c12-43e5-b08b-e7fec39d606c] Running
	I1205 08:03:21.425318    4412 system_pods.go:89] "kube-controller-manager-bridge-218000" [8c6b84bc-04f7-47c7-8a40-31913808a09a] Running
	I1205 08:03:21.425318    4412 system_pods.go:89] "kube-proxy-8r4gs" [622d5d99-1ddc-40af-9457-a2d8381c3055] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:21.425318    4412 system_pods.go:89] "kube-scheduler-bridge-218000" [b1342bf7-8218-4837-916a-cf8d004103b0] Running
	I1205 08:03:21.425318    4412 system_pods.go:89] "storage-provisioner" [8b996d75-f7c6-407d-ad41-c38a7c9d079d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:21.425318    4412 retry.go:31] will retry after 375.37242ms: missing components: kube-dns, kube-proxy
	I1205 08:03:21.827820    4412 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:21.827865    4412 system_pods.go:89] "coredns-66bc5c9577-jnqxb" [8a2acc63-c9a3-4438-a037-7d3f180b1ca1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:21.827865    4412 system_pods.go:89] "coredns-66bc5c9577-zrgxp" [f5b4e994-7931-429c-816f-146480fba04d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:21.827958    4412 system_pods.go:89] "etcd-bridge-218000" [8f3a7f79-834e-43dd-b20d-ab638640c777] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 08:03:21.827958    4412 system_pods.go:89] "kube-apiserver-bridge-218000" [ce57bde4-8c12-43e5-b08b-e7fec39d606c] Running
	I1205 08:03:21.827958    4412 system_pods.go:89] "kube-controller-manager-bridge-218000" [8c6b84bc-04f7-47c7-8a40-31913808a09a] Running
	I1205 08:03:21.827958    4412 system_pods.go:89] "kube-proxy-8r4gs" [622d5d99-1ddc-40af-9457-a2d8381c3055] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:21.827958    4412 system_pods.go:89] "kube-scheduler-bridge-218000" [b1342bf7-8218-4837-916a-cf8d004103b0] Running
	I1205 08:03:21.827958    4412 system_pods.go:89] "storage-provisioner" [8b996d75-f7c6-407d-ad41-c38a7c9d079d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:21.827958    4412 retry.go:31] will retry after 308.410296ms: missing components: kube-proxy
	I1205 08:03:22.143615    4412 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:22.143695    4412 system_pods.go:89] "coredns-66bc5c9577-jnqxb" [8a2acc63-c9a3-4438-a037-7d3f180b1ca1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:22.143715    4412 system_pods.go:89] "coredns-66bc5c9577-zrgxp" [f5b4e994-7931-429c-816f-146480fba04d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:22.143747    4412 system_pods.go:89] "etcd-bridge-218000" [8f3a7f79-834e-43dd-b20d-ab638640c777] Running
	I1205 08:03:22.143747    4412 system_pods.go:89] "kube-apiserver-bridge-218000" [ce57bde4-8c12-43e5-b08b-e7fec39d606c] Running
	I1205 08:03:22.143747    4412 system_pods.go:89] "kube-controller-manager-bridge-218000" [8c6b84bc-04f7-47c7-8a40-31913808a09a] Running
	I1205 08:03:22.143776    4412 system_pods.go:89] "kube-proxy-8r4gs" [622d5d99-1ddc-40af-9457-a2d8381c3055] Running
	I1205 08:03:22.143796    4412 system_pods.go:89] "kube-scheduler-bridge-218000" [b1342bf7-8218-4837-916a-cf8d004103b0] Running
	I1205 08:03:22.143796    4412 system_pods.go:89] "storage-provisioner" [8b996d75-f7c6-407d-ad41-c38a7c9d079d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:22.143828    4412 system_pods.go:126] duration metric: took 1.1189542s to wait for k8s-apps to be running ...
	I1205 08:03:22.143828    4412 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 08:03:22.148066    4412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 08:03:22.171489    4412 system_svc.go:56] duration metric: took 27.6599ms WaitForService to wait for kubelet
	I1205 08:03:22.171489    4412 kubeadm.go:587] duration metric: took 3.467006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:03:22.171489    4412 node_conditions.go:102] verifying NodePressure condition ...
	I1205 08:03:22.178201    4412 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1205 08:03:22.178261    4412 node_conditions.go:123] node cpu capacity is 16
	I1205 08:03:22.178315    4412 node_conditions.go:105] duration metric: took 6.8261ms to run NodePressure ...
	I1205 08:03:22.178358    4412 start.go:242] waiting for startup goroutines ...
	I1205 08:03:22.178442    4412 start.go:247] waiting for cluster config update ...
	I1205 08:03:22.178442    4412 start.go:256] writing updated cluster config ...
	I1205 08:03:22.185160    4412 ssh_runner.go:195] Run: rm -f paused
	I1205 08:03:22.198665    4412 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:22.227261    4412 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jnqxb" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 08:03:24.242514    4412 pod_ready.go:104] pod "coredns-66bc5c9577-jnqxb" is not "Ready", error: <nil>
	I1205 08:03:23.844525    7752 addons.go:239] Setting addon default-storageclass=true in "kubenet-218000"
	I1205 08:03:23.844525    7752 host.go:66] Checking if "kubenet-218000" exists ...
	I1205 08:03:23.848351    7752 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:03:23.848351    7752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:03:23.854436    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:03:23.855030    7752 cli_runner.go:164] Run: docker container inspect kubenet-218000 --format={{.State.Status}}
	I1205 08:03:23.916326    7752 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:03:23.916326    7752 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:03:23.918333    7752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62580 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa Username:docker}
	I1205 08:03:23.919335    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:03:23.982987    7752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62580 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-218000\id_rsa Username:docker}
	I1205 08:03:24.433009    7752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 08:03:24.433166    7752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:03:24.438251    7752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:03:24.535013    7752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:03:25.469869    7752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036625s)
	I1205 08:03:25.469869    7752 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.036788s)
	I1205 08:03:25.469904    7752 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.0316366s)
	I1205 08:03:25.469904    7752 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1205 08:03:25.474033    7752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-218000
	I1205 08:03:25.529035    7752 node_ready.go:35] waiting up to 15m0s for node "kubenet-218000" to be "Ready" ...
	I1205 08:03:25.540044    7752 node_ready.go:49] node "kubenet-218000" is "Ready"
	I1205 08:03:25.540044    7752 node_ready.go:38] duration metric: took 11.0087ms for node "kubenet-218000" to be "Ready" ...
	I1205 08:03:25.540044    7752 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:03:25.546057    7752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:03:25.550054    7752 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 08:03:25.552043    7752 addons.go:530] duration metric: took 1.7875568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 08:03:25.647065    7752 api_server.go:72] duration metric: took 1.8825256s to wait for apiserver process to appear ...
	I1205 08:03:25.647065    7752 api_server.go:88] waiting for apiserver healthz status ...
	I1205 08:03:25.647065    7752 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62579/healthz ...
	I1205 08:03:25.755844    7752 api_server.go:279] https://127.0.0.1:62579/healthz returned 200:
	ok
	I1205 08:03:25.758469    7752 api_server.go:141] control plane version: v1.34.2
	I1205 08:03:25.759044    7752 api_server.go:131] duration metric: took 111.9773ms to wait for apiserver health ...
	I1205 08:03:25.759044    7752 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 08:03:25.767783    7752 system_pods.go:59] 8 kube-system pods found
	I1205 08:03:25.767783    7752 system_pods.go:61] "coredns-66bc5c9577-gkt9w" [769367f1-dbca-4418-8d5b-15a2b63af195] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:25.767783    7752 system_pods.go:61] "coredns-66bc5c9577-gsfxl" [366912f2-5b0b-4a3e-bf32-f645e22ff075] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:25.767783    7752 system_pods.go:61] "etcd-kubenet-218000" [0e880f04-5bb0-4892-be87-973ce2eb08cb] Running
	I1205 08:03:25.767783    7752 system_pods.go:61] "kube-apiserver-kubenet-218000" [3b746f93-2c55-4e61-aeb2-d3d1ab08a063] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:03:25.767783    7752 system_pods.go:61] "kube-controller-manager-kubenet-218000" [b22d0f23-1433-42f1-8c7c-1505773dae1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:03:25.767783    7752 system_pods.go:61] "kube-proxy-l9mnz" [4a76fe95-4433-415e-a55b-b8452647e10b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:25.767783    7752 system_pods.go:61] "kube-scheduler-kubenet-218000" [f29cc984-6b08-4628-adcd-85ac2865c78f] Running
	I1205 08:03:25.767783    7752 system_pods.go:61] "storage-provisioner" [0fb20aa6-7936-4e9c-9463-766646f0ff19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:25.767783    7752 system_pods.go:74] duration metric: took 8.7388ms to wait for pod list to return data ...
	I1205 08:03:25.767783    7752 default_sa.go:34] waiting for default service account to be created ...
	I1205 08:03:25.825682    7752 default_sa.go:45] found service account: "default"
	I1205 08:03:25.826220    7752 default_sa.go:55] duration metric: took 58.4363ms for default service account to be created ...
	I1205 08:03:25.826220    7752 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 08:03:25.927972    7752 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:25.927972    7752 system_pods.go:89] "coredns-66bc5c9577-gkt9w" [769367f1-dbca-4418-8d5b-15a2b63af195] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:25.927972    7752 system_pods.go:89] "coredns-66bc5c9577-gsfxl" [366912f2-5b0b-4a3e-bf32-f645e22ff075] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:25.927972    7752 system_pods.go:89] "etcd-kubenet-218000" [0e880f04-5bb0-4892-be87-973ce2eb08cb] Running
	I1205 08:03:25.927972    7752 system_pods.go:89] "kube-apiserver-kubenet-218000" [3b746f93-2c55-4e61-aeb2-d3d1ab08a063] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:03:25.927972    7752 system_pods.go:89] "kube-controller-manager-kubenet-218000" [b22d0f23-1433-42f1-8c7c-1505773dae1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:03:25.927972    7752 system_pods.go:89] "kube-proxy-l9mnz" [4a76fe95-4433-415e-a55b-b8452647e10b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:25.927972    7752 system_pods.go:89] "kube-scheduler-kubenet-218000" [f29cc984-6b08-4628-adcd-85ac2865c78f] Running
	I1205 08:03:25.927972    7752 system_pods.go:89] "storage-provisioner" [0fb20aa6-7936-4e9c-9463-766646f0ff19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:25.927972    7752 retry.go:31] will retry after 281.995327ms: missing components: kube-dns, kube-proxy
	I1205 08:03:25.986978    7752 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-218000" context rescaled to 1 replicas
	I1205 08:03:26.217438    7752 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:26.217531    7752 system_pods.go:89] "coredns-66bc5c9577-gkt9w" [769367f1-dbca-4418-8d5b-15a2b63af195] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:26.217531    7752 system_pods.go:89] "coredns-66bc5c9577-gsfxl" [366912f2-5b0b-4a3e-bf32-f645e22ff075] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:26.217567    7752 system_pods.go:89] "etcd-kubenet-218000" [0e880f04-5bb0-4892-be87-973ce2eb08cb] Running
	I1205 08:03:26.217585    7752 system_pods.go:89] "kube-apiserver-kubenet-218000" [3b746f93-2c55-4e61-aeb2-d3d1ab08a063] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:03:26.217585    7752 system_pods.go:89] "kube-controller-manager-kubenet-218000" [b22d0f23-1433-42f1-8c7c-1505773dae1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:03:26.217637    7752 system_pods.go:89] "kube-proxy-l9mnz" [4a76fe95-4433-415e-a55b-b8452647e10b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:26.217637    7752 system_pods.go:89] "kube-scheduler-kubenet-218000" [f29cc984-6b08-4628-adcd-85ac2865c78f] Running
	I1205 08:03:26.217672    7752 system_pods.go:89] "storage-provisioner" [0fb20aa6-7936-4e9c-9463-766646f0ff19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:26.217704    7752 retry.go:31] will retry after 242.168475ms: missing components: kube-dns, kube-proxy
	I1205 08:03:26.467998    7752 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:26.467998    7752 system_pods.go:89] "coredns-66bc5c9577-gkt9w" [769367f1-dbca-4418-8d5b-15a2b63af195] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:26.467998    7752 system_pods.go:89] "coredns-66bc5c9577-gsfxl" [366912f2-5b0b-4a3e-bf32-f645e22ff075] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:26.467998    7752 system_pods.go:89] "etcd-kubenet-218000" [0e880f04-5bb0-4892-be87-973ce2eb08cb] Running
	I1205 08:03:26.467998    7752 system_pods.go:89] "kube-apiserver-kubenet-218000" [3b746f93-2c55-4e61-aeb2-d3d1ab08a063] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:03:26.467998    7752 system_pods.go:89] "kube-controller-manager-kubenet-218000" [b22d0f23-1433-42f1-8c7c-1505773dae1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:03:26.467998    7752 system_pods.go:89] "kube-proxy-l9mnz" [4a76fe95-4433-415e-a55b-b8452647e10b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:26.467998    7752 system_pods.go:89] "kube-scheduler-kubenet-218000" [f29cc984-6b08-4628-adcd-85ac2865c78f] Running
	I1205 08:03:26.467998    7752 system_pods.go:89] "storage-provisioner" [0fb20aa6-7936-4e9c-9463-766646f0ff19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:26.467998    7752 retry.go:31] will retry after 362.018168ms: missing components: kube-dns, kube-proxy
	I1205 08:03:26.837493    7752 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:26.837558    7752 system_pods.go:89] "coredns-66bc5c9577-gkt9w" [769367f1-dbca-4418-8d5b-15a2b63af195] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:26.837558    7752 system_pods.go:89] "coredns-66bc5c9577-gsfxl" [366912f2-5b0b-4a3e-bf32-f645e22ff075] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:26.837612    7752 system_pods.go:89] "etcd-kubenet-218000" [0e880f04-5bb0-4892-be87-973ce2eb08cb] Running
	I1205 08:03:26.837612    7752 system_pods.go:89] "kube-apiserver-kubenet-218000" [3b746f93-2c55-4e61-aeb2-d3d1ab08a063] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:03:26.837612    7752 system_pods.go:89] "kube-controller-manager-kubenet-218000" [b22d0f23-1433-42f1-8c7c-1505773dae1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:03:26.837662    7752 system_pods.go:89] "kube-proxy-l9mnz" [4a76fe95-4433-415e-a55b-b8452647e10b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:03:26.837706    7752 system_pods.go:89] "kube-scheduler-kubenet-218000" [f29cc984-6b08-4628-adcd-85ac2865c78f] Running
	I1205 08:03:26.837706    7752 system_pods.go:89] "storage-provisioner" [0fb20aa6-7936-4e9c-9463-766646f0ff19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:03:26.837744    7752 retry.go:31] will retry after 601.873592ms: missing components: kube-dns, kube-proxy
	I1205 08:03:27.447137    7752 system_pods.go:86] 8 kube-system pods found
	I1205 08:03:27.447205    7752 system_pods.go:89] "coredns-66bc5c9577-gkt9w" [769367f1-dbca-4418-8d5b-15a2b63af195] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:27.447205    7752 system_pods.go:89] "coredns-66bc5c9577-gsfxl" [366912f2-5b0b-4a3e-bf32-f645e22ff075] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:03:27.447205    7752 system_pods.go:89] "etcd-kubenet-218000" [0e880f04-5bb0-4892-be87-973ce2eb08cb] Running
	I1205 08:03:27.447205    7752 system_pods.go:89] "kube-apiserver-kubenet-218000" [3b746f93-2c55-4e61-aeb2-d3d1ab08a063] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:03:27.447205    7752 system_pods.go:89] "kube-controller-manager-kubenet-218000" [b22d0f23-1433-42f1-8c7c-1505773dae1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:03:27.447205    7752 system_pods.go:89] "kube-proxy-l9mnz" [4a76fe95-4433-415e-a55b-b8452647e10b] Running
	I1205 08:03:27.447284    7752 system_pods.go:89] "kube-scheduler-kubenet-218000" [f29cc984-6b08-4628-adcd-85ac2865c78f] Running
	I1205 08:03:27.447320    7752 system_pods.go:89] "storage-provisioner" [0fb20aa6-7936-4e9c-9463-766646f0ff19] Running
	I1205 08:03:27.447320    7752 system_pods.go:126] duration metric: took 1.6210736s to wait for k8s-apps to be running ...
	I1205 08:03:27.447320    7752 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 08:03:27.451888    7752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 08:03:27.470821    7752 system_svc.go:56] duration metric: took 23.4705ms WaitForService to wait for kubelet
	I1205 08:03:27.470858    7752 kubeadm.go:587] duration metric: took 3.7062899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:03:27.470858    7752 node_conditions.go:102] verifying NodePressure condition ...
	I1205 08:03:27.475844    7752 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1205 08:03:27.476369    7752 node_conditions.go:123] node cpu capacity is 16
	I1205 08:03:27.476369    7752 node_conditions.go:105] duration metric: took 5.5105ms to run NodePressure ...
	I1205 08:03:27.476369    7752 start.go:242] waiting for startup goroutines ...
	I1205 08:03:27.476369    7752 start.go:247] waiting for cluster config update ...
	I1205 08:03:27.476453    7752 start.go:256] writing updated cluster config ...
	I1205 08:03:27.481690    7752 ssh_runner.go:195] Run: rm -f paused
	I1205 08:03:27.488784    7752 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:27.494099    7752 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkt9w" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 08:03:26.741885    4412 pod_ready.go:104] pod "coredns-66bc5c9577-jnqxb" is not "Ready", error: <nil>
	W1205 08:03:29.237682    4412 pod_ready.go:104] pod "coredns-66bc5c9577-jnqxb" is not "Ready", error: <nil>
	W1205 08:03:30.179859    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:29.506757    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gkt9w" is not "Ready", error: <nil>
	W1205 08:03:32.006590    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gkt9w" is not "Ready", error: <nil>
	W1205 08:03:31.238986    4412 pod_ready.go:104] pod "coredns-66bc5c9577-jnqxb" is not "Ready", error: <nil>
	I1205 08:03:32.233416    4412 pod_ready.go:99] pod "coredns-66bc5c9577-jnqxb" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-jnqxb" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-jnqxb" not found
	I1205 08:03:32.233480    4412 pod_ready.go:86] duration metric: took 10.0059348s for pod "coredns-66bc5c9577-jnqxb" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:32.233480    4412 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zrgxp" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 08:03:34.245417    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:34.505773    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gkt9w" is not "Ready", error: <nil>
	W1205 08:03:36.506308    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gkt9w" is not "Ready", error: <nil>
	I1205 08:03:38.000036    7752 pod_ready.go:99] pod "coredns-66bc5c9577-gkt9w" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-gkt9w" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-gkt9w" not found
	I1205 08:03:38.000137    7752 pod_ready.go:86] duration metric: took 10.5058709s for pod "coredns-66bc5c9577-gkt9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:38.000137    7752 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gsfxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 08:03:36.747080    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:39.245726    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:40.215563    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:40.010661    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:42.018963    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	
	
	==> Docker <==
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735621442Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735810362Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735822264Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735827264Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.735874969Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.736046888Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.736251309Z" level=info msg="Initializing buildkit"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.916830207Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926605346Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926832270Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926915179Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:53:09 newest-cni-042100 dockerd[1174]: time="2025-12-05T07:53:09.926837171Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:53:09 newest-cni-042100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:53:10 newest-cni-042100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:53:10 newest-cni-042100 cri-dockerd[1467]: time="2025-12-05T07:53:10Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:53:10 newest-cni-042100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:03:44.391553   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:03:44.393218   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:03:44.394344   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:03:44.395288   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:03:44.397005   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.405377] CPU: 9 PID: 459842 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f6348cb1b20
	[  +0.000039] Code: Unable to access opcode bytes at RIP 0x7f6348cb1af6.
	[  +0.000002] RSP: 002b:00007ffd704a8b40 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000029] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.855222] CPU: 10 PID: 460005 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f0cb9d8db20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f0cb9d8daf6.
	[  +0.000001] RSP: 002b:00007fff88202d50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec 5 08:03] tmpfs: Unknown parameter 'noswap'
	[  +5.503156] tmpfs: Unknown parameter 'noswap'
	[  +3.613909] tmpfs: Unknown parameter 'noswap'
	[  +5.225218] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 08:03:44 up  3:37,  0 user,  load average: 4.66, 4.85, 4.28
	Linux newest-cni-042100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:03:41 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:03:41 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 05 08:03:41 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:41 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:42 newest-cni-042100 kubelet[13210]: E1205 08:03:42.087636   13210 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:03:42 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:03:42 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:03:42 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 05 08:03:42 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:42 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:42 newest-cni-042100 kubelet[13236]: E1205 08:03:42.848746   13236 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:03:42 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:03:42 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:03:43 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 05 08:03:43 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:43 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:43 newest-cni-042100 kubelet[13264]: E1205 08:03:43.592463   13264 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:03:43 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:03:43 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:03:44 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 05 08:03:44 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:44 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:03:44 newest-cni-042100 kubelet[13363]: E1205 08:03:44.343741   13363 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:03:44 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:03:44 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
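
The kubelet journal at the end of this dump is the real failure for the whole newest-cni group: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd crash-loops it (restart counter 474 through 477 in the span shown), and with no kubelet the API server never comes up, which is why every kubectl call above is refused on localhost:8443. dockerd's own startup warning above ("Support for cgroup v1 is deprecated...") points the same way. A quick way to confirm the host's cgroup version, as a generic sketch that is not part of the recorded run (given the kubelet error, this WSL2 host should report v1):

    # cgroup2fs => cgroup v2; tmpfs => cgroup v1
    $ stat -fc %T /sys/fs/cgroup/
    # Docker reports the same information
    $ docker info --format '{{.CgroupVersion}}'
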
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100: exit status 6 (623.7021ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 08:03:45.500664   10544 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
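
The status probe fails a step before it can reach the cluster: the "newest-cni-042100" entry is missing from the kubeconfig, so status.go cannot resolve an API endpoint at all, and the stdout warning about a stale minikube-vm context is the visible symptom. Outside CI the client side is normally repaired the way the warning suggests (a sketch using this run's profile name):

    $ minikube update-context -p newest-cni-042100
    $ kubectl config current-context

That only fixes kubectl's view, though; the cluster itself is reported Stopped, as the helper lines below record.
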
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-042100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (115.59s)

TestStartStop/group/newest-cni/serial/SecondStart (384.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-042100 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-042100 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m17.8689593s)
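
Two details in the transcript below are worth separating from the failure itself. First, the v1.35.0-beta.0 preload tarball 404s from both sources (preload.go:144), which is expected for a pre-release Kubernetes version; minikube falls back to its per-image cache, and the cache.go lines confirm every required image already exists on disk. Whether a preload is published can be checked directly, as a sketch with the URL copied from the log:

    $ curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 | head -n 1

Second, the container restart and machine provisioning succeed; the exit status 105 after 6m17s comes later in the start sequence, presumably the same kubelet cgroup v1 validation failure recorded for this profile in the previous test.
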

-- stdout --
	* [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1205 08:03:48.079593    6576 out.go:360] Setting OutFile to fd 1628 ...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.133685    6576 out.go:374] Setting ErrFile to fd 1512...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.149881    6576 out.go:368] Setting JSON to false
	I1205 08:03:48.152825    6576 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13085,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:03:48.152825    6576 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:03:48.159945    6576 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:03:48.164658    6576 notify.go:221] Checking for updates...
	I1205 08:03:48.167308    6576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:48.170547    6576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:03:48.173264    6576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:03:48.177277    6576 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:03:48.179134    6576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:03:48.182963    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:48.184223    6576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:03:48.306826    6576 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:03:48.310816    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.562528    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.540004205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.565521    6576 out.go:179] * Using the docker driver based on existing profile
	I1205 08:03:48.568528    6576 start.go:309] selected driver: docker
	I1205 08:03:48.568528    6576 start.go:927] validating driver "docker" against &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.568528    6576 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:03:48.621627    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.870676    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.852383077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.870676    6576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 08:03:48.870676    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:03:48.871676    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:03:48.871676    6576 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.874674    6576 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 08:03:48.876674    6576 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:03:48.879674    6576 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:03:48.881674    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:03:48.881674    6576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 08:03:48.924123    6576 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:48.965045    6576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:03:48.965045    6576 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 08:03:49.173795    6576 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:49.174041    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 08:03:49.174210    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 08:03:49.176070    6576 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:03:49.176070    6576 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:49.176610    6576 start.go:364] duration metric: took 539.2µs to acquireMachinesLock for "newest-cni-042100"
	I1205 08:03:49.176876    6576 start.go:96] Skipping create...Using existing machine configuration
	I1205 08:03:49.176954    6576 fix.go:54] fixHost starting: 
	I1205 08:03:49.185185    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:49.467905    6576 fix.go:112] recreateIfNeeded on newest-cni-042100: state=Stopped err=<nil>
	W1205 08:03:49.468085    6576 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 08:03:49.492567    6576 out.go:252] * Restarting existing docker container for "newest-cni-042100" ...
	I1205 08:03:49.497575    6576 cli_runner.go:164] Run: docker start newest-cni-042100
	I1205 08:03:50.779131    6576 cli_runner.go:217] Completed: docker start newest-cni-042100: (1.2815354s)
	I1205 08:03:50.788112    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:51.139299    6576 kic.go:430] container "newest-cni-042100" state is running.
	I1205 08:03:51.164376    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:51.273747    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:51.276892    6576 machine.go:94] provisionDockerMachine start ...
	I1205 08:03:51.284394    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:51.396042    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:51.397040    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:51.397040    6576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:03:51.400042    6576 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 08:03:52.385305    6576 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.385658    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 08:03:52.385720    6576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.211458s
	I1205 08:03:52.385800    6576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 08:03:52.435659    6576 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.435659    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 08:03:52.435659    6576 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2613971s
	I1205 08:03:52.435659    6576 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 08:03:52.467883    6576 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.468216    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 08:03:52.468216    6576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.2939732s
	I1205 08:03:52.468216    6576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 08:03:52.472465    6576 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.472465    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 08:03:52.472465    6576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2982024s
	I1205 08:03:52.472465    6576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 08:03:52.472991    6576 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.473088    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 08:03:52.473088    6576 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2988253s
	I1205 08:03:52.473088    6576 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 08:03:52.478918    6576 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.479537    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 08:03:52.479537    6576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3052743s
	I1205 08:03:52.479537    6576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 08:03:52.488107    6576 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.489284    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 08:03:52.489284    6576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.3150206s
	I1205 08:03:52.489284    6576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 08:03:52.587256    6576 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.588098    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 08:03:52.588098    6576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.413907s
	I1205 08:03:52.588098    6576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 08:03:52.588098    6576 cache.go:87] Successfully saved all images to host disk.
	I1205 08:03:54.578463    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.578463    6576 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 08:03:54.583153    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.645702    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.646148    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.646193    6576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 08:03:54.866524    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.872867    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.933417    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.934199    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.934272    6576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:03:55.129977    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:03:55.129977    6576 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:03:55.129977    6576 ubuntu.go:190] setting up certificates
	I1205 08:03:55.129977    6576 provision.go:84] configureAuth start
	I1205 08:03:55.133735    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:55.190185    6576 provision.go:143] copyHostCerts
	I1205 08:03:55.190185    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:03:55.190185    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:03:55.190984    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:03:55.191986    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:03:55.191986    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:03:55.192251    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:03:55.193178    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:03:55.193178    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:03:55.193462    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:03:55.194234    6576 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
	I1205 08:03:55.277216    6576 provision.go:177] copyRemoteCerts
	I1205 08:03:55.282373    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:03:55.285821    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.350220    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:55.476652    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:03:55.511250    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:03:55.546706    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 08:03:55.583614    6576 provision.go:87] duration metric: took 453.6304ms to configureAuth
	I1205 08:03:55.583614    6576 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:03:55.585275    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:55.589206    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.651189    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.652212    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.652246    6576 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:03:55.836329    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:03:55.837449    6576 ubuntu.go:71] root file system type: overlay
	I1205 08:03:55.837646    6576 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:03:55.841558    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.910453    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.911069    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.911069    6576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:03:56.123635    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:03:56.128031    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.191540    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:56.191765    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:56.191765    6576 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:03:56.396364    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
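The diff-guarded command above is an idempotent update idiom: diff exits non-zero only when the freshly rendered unit differs from the installed one, so the move, daemon-reload, enable, and restart run only on an actual change. A minimal standalone sketch of the same pattern (temp path here is hypothetical):

    # Swap in the new unit and restart only if it differs from the
    # installed one; identical files leave the running service alone.
    new=/tmp/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {
      sudo mv "$new" "$cur"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }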
	I1205 08:03:56.396364    6576 machine.go:97] duration metric: took 5.1193899s to provisionDockerMachine
	I1205 08:03:56.396364    6576 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 08:03:56.396897    6576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:03:56.402233    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:03:56.406223    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.460168    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.609105    6576 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:03:56.617925    6576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:03:56.617925    6576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:03:56.618732    6576 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:03:56.623542    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:03:56.637899    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:03:56.671787    6576 start.go:296] duration metric: took 274.8468ms for postStartSetup
	I1205 08:03:56.675921    6576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:03:56.678948    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.735289    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.884826    6576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
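The two df probes above read how full /var is: in the second output row, column 5 of df -h is the percent used and column 4 of df -BG is the space still available in gibibytes. Equivalent standalone checks:

    # Percent of /var used and space still free, as probed above.
    used_pct=$(df -h /var | awk 'NR==2{print $5}')   # e.g. "17%"
    free_gib=$(df -BG /var | awk 'NR==2{print $4}')  # e.g. "53G"
    echo "/var: ${used_pct} used, ${free_gib} free"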
	I1205 08:03:56.893835    6576 fix.go:56] duration metric: took 7.7168367s for fixHost
	I1205 08:03:56.893835    6576 start.go:83] releasing machines lock for "newest-cni-042100", held for 7.7169474s
	I1205 08:03:56.896826    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:56.959384    6576 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:03:56.965413    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.966255    6576 ssh_runner.go:195] Run: cat /version.json
	I1205 08:03:56.973872    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:57.022198    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:57.026201    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	W1205 08:03:57.148711    6576 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
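Exit status 127 means the shell never found the binary: the Windows-side name curl.exe was passed into the Linux guest, where the tool is installed as plain curl, so this probe fails without ever testing the network. A probe that actually runs inside the guest would look like this (assuming curl is present in the guest image):

    # Connectivity probe from inside the guest; note the binary name.
    if curl -sS -m 2 https://registry.k8s.io/ >/dev/null; then
      echo "registry.k8s.io reachable from the guest"
    else
      echo "probe failed with exit $?"
    fi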
	I1205 08:03:57.162212    6576 ssh_runner.go:195] Run: systemctl --version
	I1205 08:03:57.181097    6576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:03:57.193288    6576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:03:57.197753    6576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:03:57.214357    6576 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 08:03:57.214357    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.214357    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.214357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:57.242461    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:03:57.262818    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 08:03:57.264705    6576 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:03:57.264749    6576 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:03:57.282712    6576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:03:57.286891    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:57.310466    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.333091    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:57.356105    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.377603    6576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:57.401090    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:57.423330    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:57.445407    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:57.472206    6576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:57.488210    6576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:03:57.505210    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:57.657790    6576 ssh_runner.go:195] Run: sudo systemctl restart containerd
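The run of sed commands above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, migrates v1 runtime references to io.containerd.runc.v2, and fixes the CNI conf_dir; the restart then makes containerd re-read the file. Condensed into a standalone sketch using the same sed idiom:

    # The key config.toml rewrites, followed by a restart to apply them.
    toml=/etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$toml"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$toml"
    sudo systemctl daemon-reload && sudo systemctl restart containerd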
	I1205 08:03:57.802417    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.802417    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.807146    6576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:57.832467    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.857712    6576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:57.930272    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.960276    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:57.984286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:58.017277    6576 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:58.032288    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:58.048281    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:03:58.077282    6576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:58.275290    6576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:58.457293    6576 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:58.457293    6576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:03:58.486286    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:58.509287    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:58.648318    6576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:04:00.173930    6576 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5255881s)
	I1205 08:04:00.177929    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:04:00.201541    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:04:00.228851    6576 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 08:04:00.259044    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:00.283032    6576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:04:00.429299    6576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:04:00.593446    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.738544    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:04:00.766865    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:04:00.791407    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.930315    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
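Kubelet speaks only CRI and the Docker Engine does not, so the cri-dockerd shim has to be brought up: socket first, then service, with reset-failed clearing any earlier crash state so the restart is not rate-limited by systemd. The sequence above, stripped of the surrounding log noise:

    # Bring up the cri-dockerd shim (socket activation, then service).
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket
    sudo systemctl reset-failed cri-docker.service
    sudo systemctl restart cri-docker.service
    sudo systemctl is-active --quiet cri-docker.service && echo "shim up"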
	I1205 08:04:01.041317    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:01.059628    6576 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:04:01.064630    6576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:04:01.072635    6576 start.go:564] Will wait 60s for crictl version
	I1205 08:04:01.076636    6576 ssh_runner.go:195] Run: which crictl
	I1205 08:04:01.090615    6576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:04:01.132099    6576 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:04:01.136068    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.182106    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.227459    6576 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 08:04:01.231071    6576 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 08:04:01.375969    6576 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:04:01.379962    6576 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:04:01.387350    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
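The one-liner above pins host.minikube.internal idempotently: grep -v drops any stale entry, the echo appends the fresh mapping resolved via dig from inside the container, and writing through a temp file before cp means /etc/hosts is never left truncated mid-write. As a standalone sketch (the dig runs on the host via docker exec, the hosts edit runs inside the guest):

    # Resolve the host-side IP and pin it in the guest's /etc/hosts.
    ip=$(docker exec -t newest-cni-042100 dig +short host.docker.internal)
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$ip"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts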
	I1205 08:04:01.408320    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:01.468320    6576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 08:04:01.471323    6576 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:04:01.471323    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:04:01.475324    6576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:04:01.511342    6576 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:04:01.512362    6576 cache_images.go:86] Images are preloaded, skipping loading
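The preload check compares the local docker image list against the set of images the selected Kubernetes version needs; only when every expected tag is already present is loading the preload tarball skipped. A sketch of that comparison (expected list abbreviated to three of the tags printed above):

    # Are the required images already in the local docker store?
    expected=(
      registry.k8s.io/kube-apiserver:v1.35.0-beta.0
      registry.k8s.io/etcd:3.6.5-0
      registry.k8s.io/pause:3.10.1
    )
    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
    missing=0
    for img in "${expected[@]}"; do
      grep -qxF "$img" <<<"$have" || { echo "missing: $img"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo "images are preloaded, skipping loading"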
	I1205 08:04:01.512362    6576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 08:04:01.512362    6576 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 08:04:01.515327    6576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:04:01.600646    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:04:01.600646    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:04:01.600646    6576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 08:04:01.600646    6576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:04:01.600646    6576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
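The generated kubeadm.yaml stacks four documents separated by "---": InitConfiguration (node-local bootstrap: advertise address, CRI socket, taints), ClusterConfiguration (control-plane endpoint, cert dir, pod and service subnets), KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a file before it is used; a hedged example, assuming the validate subcommand exists in this kubeadm build:

    # Sanity-check the stacked config before handing it to kubeadm.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new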
	
	I1205 08:04:01.604645    6576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 08:04:01.617663    6576 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:04:01.621646    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:04:01.634708    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 08:04:01.659457    6576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 08:04:01.681516    6576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1205 08:04:01.709549    6576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:04:01.717165    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.737936    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:01.886462    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:01.908845    6576 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 08:04:01.908845    6576 certs.go:195] generating shared ca certs ...
	I1205 08:04:01.908845    6576 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:01.910250    6576 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:04:01.910428    6576 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:04:01.910428    6576 certs.go:257] generating profile certs ...
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 08:04:01.911645    6576 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 08:04:01.912393    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:04:01.912708    6576 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:04:01.912818    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:04:01.913766    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:04:01.914884    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:04:01.946745    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:04:01.978670    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:04:02.020771    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:04:02.052789    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 08:04:02.083785    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:04:02.111686    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:04:02.138106    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:04:02.167957    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:04:02.197699    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:04:02.228974    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:04:02.258542    6576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:04:02.283541    6576 ssh_runner.go:195] Run: openssl version
	I1205 08:04:02.296537    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.312534    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:04:02.327543    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.334539    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.339544    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.392223    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:04:02.408977    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.424981    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:04:02.439981    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.446982    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.451985    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.500175    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:04:02.518368    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.537597    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:04:02.555653    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.562656    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.566659    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.617005    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
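The hash-named links being tested here (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's CA-directory convention: openssl x509 -hash prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs is what lets OpenSSL locate the CA by hash at verification time. Reproducing one of the links above:

    # Derive the subject hash and create the <hash>.0 CA symlink.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # b5213941 in this run
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "CA link in place"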
	I1205 08:04:02.635329    6576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:04:02.649383    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 08:04:02.697863    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 08:04:02.747535    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 08:04:02.802236    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 08:04:02.853222    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 08:04:02.901642    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
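Each -checkend 86400 probe above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is how existing certs are judged reusable rather than regenerated:

    # Exit 0 only if the cert is still valid 24h from now.
    for crt in apiserver-kubelet-client etcd/server front-proxy-client; do
      if openssl x509 -noout -checkend 86400 \
           -in "/var/lib/minikube/certs/${crt}.crt"; then
        echo "${crt}: valid for at least another day"
      else
        echo "${crt}: expiring soon, would be regenerated"
      fi
    done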
	I1205 08:04:02.946962    6576 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:04:02.951256    6576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:04:02.986478    6576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:04:02.999955    6576 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 08:04:02.999955    6576 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 08:04:03.003999    6576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 08:04:03.019291    6576 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 08:04:03.022819    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.083372    6576 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.084185    6576 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042100" cluster setting kubeconfig missing "newest-cni-042100" context setting]
	I1205 08:04:03.084741    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
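The profile was absent from the Windows-side kubeconfig, so the file is repaired under a write lock, adding both the cluster and the context entry. The file is written directly rather than through kubectl, but the repair is roughly equivalent to the following calls (server port here is hypothetical; the real one is the published 8443 mapping):

    # Approximate equivalent of the kubeconfig repair (port assumed).
    kubectl config set-cluster newest-cni-042100 \
      --server=https://127.0.0.1:55555 \
      --certificate-authority="$HOME/.minikube/ca.crt"
    kubectl config set-context newest-cni-042100 \
      --cluster=newest-cni-042100 --user=newest-cni-042100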
	I1205 08:04:03.109144    6576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 08:04:03.128232    6576 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 08:04:03.138905    6576 kubeadm.go:602] duration metric: took 138.9481ms to restartPrimaryControlPlane
	I1205 08:04:03.138905    6576 kubeadm.go:403] duration metric: took 191.9404ms to StartCluster
	I1205 08:04:03.138905    6576 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.138905    6576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.141698    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.142419    6576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:04:03.142419    6576 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:04:03.142419    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:04:03.163290    6576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting dashboard=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon dashboard=true in "newest-cni-042100"
	W1205 08:04:03.163290    6576 addons.go:248] addon dashboard should already be in state true
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.192363    6576 out.go:179] * Verifying Kubernetes components...
	I1205 08:04:03.249622    6576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-042100"
	I1205 08:04:03.250609    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.257607    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.258609    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:03.261608    6576 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 08:04:03.264610    6576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:04:03.309607    6576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.309607    6576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:04:03.312609    6576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.312609    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:04:03.312609    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.315610    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.318607    6576 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 08:04:03.344609    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 08:04:03.344609    6576 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 08:04:03.353008    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.373762    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.389748    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.415749    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.454747    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:03.481745    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.544756    6576 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:04:03.550761    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
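The pgrep pattern waits for the apiserver process itself before any API polling starts: -f matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest match. As a wait loop:

    # Block until a kube-apiserver process with minikube flags exists.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 1
    done
    echo "kube-apiserver process is up"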
	I1205 08:04:03.552751    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.556766    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 08:04:03.556766    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 08:04:03.561743    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.627813    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 08:04:03.627923    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 08:04:03.654463    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 08:04:03.654463    6576 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 08:04:03.731575    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 08:04:03.731654    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1205 08:04:03.751356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.751356    6576 retry.go:31] will retry after 148.467646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.754346    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	W1205 08:04:03.755354    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.755354    6576 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 08:04:03.755354    6576 retry.go:31] will retry after 202.130528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.774491    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 08:04:03.774491    6576 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 08:04:03.793803    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 08:04:03.793803    6576 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 08:04:03.828295    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 08:04:03.828351    6576 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 08:04:03.851355    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.851355    6576 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 08:04:03.876402    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.905217    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:03.957742    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.957742    6576 retry.go:31] will retry after 291.655688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.962256    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:03.992521    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.992521    6576 retry.go:31] will retry after 561.792628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.049441    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:04.057481    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.057556    6576 retry.go:31] will retry after 288.112081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.254701    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.343216    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.343216    6576 retry.go:31] will retry after 359.979776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.350062    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:04.431174    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.431174    6576 retry.go:31] will retry after 483.679942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.549772    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:04.559147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:04.642871    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.642871    6576 retry.go:31] will retry after 528.970083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.708123    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.787283    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.787283    6576 retry.go:31] will retry after 459.684582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.919229    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.004707    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.004707    6576 retry.go:31] will retry after 831.823948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.050298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.177969    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:05.252148    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:05.268807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.268914    6576 retry.go:31] will retry after 1.219301827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
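
The "will retry after ..." intervals above (561ms, 288ms, 483ms, 528ms, then 1.2s and climbing) come from minikube's retry helper, which backs off with jitter instead of hammering the dead endpoint at a fixed rate. A hedged sketch of that pattern (function names here are illustrative, not minikube's actual retry.go API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to attempts times, roughly doubling the
// delay between tries and adding up to 50% jitter so that concurrent
// appliers do not retry in lockstep.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		return errors.New("apply failed") // stand-in for the kubectl apply
	})
}
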
	W1205 08:04:05.381615    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.381684    6576 retry.go:31] will retry after 1.003502336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.548840    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.841493    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.945714    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.945714    6576 retry.go:31] will retry after 1.344373684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.051495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:06.390219    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:06.476859    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.476859    6576 retry.go:31] will retry after 916.677354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.493513    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:06.550586    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:06.586142    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.586142    6576 retry.go:31] will retry after 814.667109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.049968    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:07.295279    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:07.385161    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.385225    6576 retry.go:31] will retry after 2.309719888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.397737    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:07.404241    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.24760459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
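
Note also that the storage-provisioner, storageclass, and dashboard applies interleave: the W line just below carries the same 08:04:07.487310 timestamp as the storageclass retry above, because each addon manifest is applied and retried in its own goroutine. A toy sketch of that fan-out (illustrative names, not minikube's addons.go):

package main

import (
	"fmt"
	"sync"
)

func main() {
	manifests := []string{
		"storage-provisioner.yaml",
		"storageclass.yaml",
		"dashboard-*.yaml", // ten dashboard files applied in one kubectl call
	}
	var wg sync.WaitGroup
	for _, m := range manifests {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			// each goroutine runs its own apply-and-retry loop, which is
			// why the retry lines in this log interleave with near-identical
			// timestamps
			fmt.Println("applying", name)
		}(m)
	}
	wg.Wait()
}
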
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.229405263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.550637    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.050329    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.551330    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.052416    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.549628    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
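
Between apply attempts, minikube also polls for the apiserver process roughly every 500ms (the run of pgrep lines above), so the next retry can wait for the process to exist at all. A sketch of that liveness poll (illustrative; minikube runs the same pgrep command over SSH inside the node container):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process whose full
// command line matches the pattern shows up, or the timeout expires.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -f: match the full command line, -x: require the whole line to
		// match the pattern, -n: report only the newest matching process
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true // exit status 0 means a matching process exists
		}
		time.Sleep(500 * time.Millisecond) // the log polls at about this rate
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", waitForAPIServer(30*time.Second))
}
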
	I1205 08:04:09.699045    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:09.722067    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:09.740066    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:09.854063    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.854063    6576 retry.go:31] will retry after 1.718952919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.926061    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.926061    6576 retry.go:31] will retry after 2.401961347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.960056    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.961057    6576 retry.go:31] will retry after 3.751594778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
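Each `retry.go:31] will retry after ...` line re-prints the failed command with its output and schedules another attempt after a growing, jittered delay; for the storage-provisioner apply in this run the intervals are 1.7s, 4.1s, 5.4s, 8.9s, 18.7s. A minimal sketch of that retry shape, assuming a generic backoff helper rather than minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // roughly doubling a jittered delay each time, like the intervals in
    // the log above. Hypothetical helper, not minikube's retry package.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            } else {
                jittered := delay + time.Duration(rand.Int63n(int64(delay)))
                fmt.Printf("will retry after %v: %v\n", jittered, err)
                time.Sleep(jittered)
                delay *= 2
            }
        }
        return errors.New("all attempts failed")
    }

    func main() {
        _ = retryWithBackoff(5, time.Second, func() error {
            return errors.New("connection refused") // stand-in for the kubectl apply
        })
    }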
	I1205 08:04:10.049061    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:10.549298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.049797    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.550139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
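Interleaved with the addon retries, minikube probes for a running apiserver roughly twice a second with `sudo pgrep -xnf kube-apiserver.*minikube.*`; the applies keep failing until one of these probes finds the process. A sketch of that polling loop, again as a hypothetical stand-alone helper:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep every 500ms, matching the cadence of the
    // ssh_runner lines in the log, until a kube-apiserver process exists or
    // the deadline passes. Hypothetical helper for illustration.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process is found.
            if err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }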
	I1205 08:04:11.577133    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:11.663155    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:11.663155    6576 retry.go:31] will retry after 4.120114825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.049572    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:12.333014    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:12.419653    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.419653    6576 retry.go:31] will retry after 2.740389125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.549673    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.050128    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.549901    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.717839    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:13.806807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:13.806807    6576 retry.go:31] will retry after 4.752661147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:14.050521    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:14.551720    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.050682    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.165926    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:15.256271    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.256271    6576 retry.go:31] will retry after 4.534312748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.549805    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.787818    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:15.865098    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.865628    6576 retry.go:31] will retry after 5.383695211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:16.050434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:16.549442    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.049923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.550083    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.049667    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.551343    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.565349    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:18.647263    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:18.647263    6576 retry.go:31] will retry after 8.382323881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.050424    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.796280    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:19.904265    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.904265    6576 retry.go:31] will retry after 5.117792571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:20.052293    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:20.550380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.052677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.255736    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:21.356356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.356356    6576 retry.go:31] will retry after 8.875197166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.550333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.049310    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.550338    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.050244    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.551039    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.050874    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.550399    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:25.027043    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:25.050989    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:25.159593    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.159593    6576 retry.go:31] will retry after 7.802785807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.553440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.050359    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.551986    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:27.034606    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:27.050924    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:27.141503    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.141551    6576 retry.go:31] will retry after 13.674183061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.553694    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.049210    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.550842    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.051091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.549571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.051474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.237147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:30.345143    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.345143    6576 retry.go:31] will retry after 18.684554823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.552505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.050974    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.550315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.053025    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.550841    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.967139    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:33.050008    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:33.074001    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.074001    6576 retry.go:31] will retry after 21.457353412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.550375    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.053598    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.050034    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.050947    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.552933    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.049827    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.551205    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.050234    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.552156    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.050748    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.549737    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.050549    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.550949    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
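
The run of ssh_runner entries above shows the same probe, sudo pgrep -xnf kube-apiserver.*minikube.*, fired roughly every 500ms while waiting for the apiserver process to appear. A hedged Go sketch of that cadence (the real check runs over SSH inside the minikube container; this runs locally and is illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep on a fixed interval until a matching
// kube-apiserver process shows up or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(2 * time.Second))
}
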
	I1205 08:04:40.819283    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:40.946292    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:40.946292    6576 retry.go:31] will retry after 18.180546633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:41.051295    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:41.551923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.051010    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.550802    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.050090    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.549595    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.050323    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.551060    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.050284    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.549318    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.049045    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.550390    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.050869    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.549920    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.050040    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:49.037573    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:49.050392    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:49.132808    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.132808    6576 retry.go:31] will retry after 12.282235903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.549952    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.052465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.550412    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.053026    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.551123    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.050959    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.550243    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.051085    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.550766    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.053585    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.537931    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:54.551106    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:54.662326    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:54.662326    6576 retry.go:31] will retry after 25.982171867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:55.050927    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:55.551197    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.049847    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.551717    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.050571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.552306    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.050495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.550960    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.050091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.133373    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:59.223117    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.223117    6576 retry.go:31] will retry after 23.551015037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.551231    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.047738    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.550465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.051875    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.420389    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:01.505728    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.505728    6576 retry.go:31] will retry after 17.206812229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.551821    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.051028    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.550994    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.051369    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.550326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:03.585938    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.585938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:03.590134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:03.617879    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.617879    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:03.624332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:03.651940    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.651940    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:03.656120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:03.685733    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.685733    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:03.690030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:03.719658    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.719713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:03.723576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:03.755797    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.755797    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:03.760966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:03.789461    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.789461    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:03.793178    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:03.823147    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.823147    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
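
The block above is the diagnostic sweep that runs once the wait gives up: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` probe per expected control-plane container, each returning zero matches here. A hedged Go sketch of that enumeration (component list and helper name are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the per-component docker ps probes above,
// returning the matching container IDs (none, in this log).
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
	}
}
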
	I1205 08:05:03.823147    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:03.823679    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:03.890829    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:03.890829    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:03.937573    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:03.937573    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:04.028268    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:04.028268    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:04.028268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:04.054265    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:04.054265    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
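
With no control-plane containers to inspect and `kubectl describe nodes` refused, the gather pass above falls back to host-level sources: the kubelet and Docker units via journalctl, kernel warnings via dmesg, and container status via crictl with a docker ps fallback. A hedged Go sketch of that fan-out (the shell commands are taken verbatim from the log; the orchestration is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs the same host-level probes as the "Gathering logs for ..."
// entries above and collects their output per source.
func gatherLogs() map[string]string {
	probes := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"Docker":           `sudo journalctl -u docker -u cri-docker -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	out := make(map[string]string)
	for name, cmd := range probes {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("error: %v", err)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range gatherLogs() {
		fmt.Printf("== %s ==\n%.200s\n", name, logs)
	}
}
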
	I1205 08:05:06.624597    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:06.650113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:06.681568    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.682088    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:06.685527    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:06.715181    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.715181    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:06.718768    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:06.748649    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.748692    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:06.752313    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:06.783519    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.783582    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:06.787257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:06.817858    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.817858    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:06.821703    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:06.854241    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.854241    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:06.857773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:06.888901    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.888901    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:06.894071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:06.923675    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.923675    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:06.923675    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:06.923675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:06.974113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:06.974166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:07.037689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:07.037689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:07.080588    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:07.080588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:07.171034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:07.171067    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:07.171067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:09.706054    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:09.732108    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:09.767273    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.767300    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:09.770837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:09.802479    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.802550    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:09.806320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:09.835537    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.835537    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:09.841566    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:09.874578    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.874578    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:09.878148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:09.906942    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.907017    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:09.910154    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:09.941197    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.941197    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:09.945133    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:09.974591    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.974591    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:09.978698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:10.007749    6576 logs.go:282] 0 containers: []
	W1205 08:05:10.007749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:10.007749    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:10.007749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:10.044236    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:10.044236    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:10.130995    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:10.130995    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:10.130995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:10.158359    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:10.158945    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:10.209053    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:10.209053    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:12.782787    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:12.809043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:12.839958    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.839958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:12.845180    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:12.876657    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.876720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:12.880739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:12.908227    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.908227    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:12.912011    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:12.942400    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.942449    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:12.945431    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:12.973155    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.973155    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:12.976739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:13.004259    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.004259    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:13.008151    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:13.038225    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.038225    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:13.041692    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:13.070500    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.070500    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:13.070500    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:13.070500    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:13.134608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:13.134608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:13.173994    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:13.173994    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:13.270602    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:13.270665    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:13.270665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:13.299297    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:13.299297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:15.870600    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:15.895506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:15.927013    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.927013    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:15.930717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:15.959875    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.959941    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:15.963955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:15.992862    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.992862    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:15.996303    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:16.023966    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.023966    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:16.027786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:16.058698    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.058698    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:16.065246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:16.094826    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.094826    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:16.098650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:16.144774    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.144820    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:16.148422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:16.177296    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.177296    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:16.177296    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:16.177296    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:16.242225    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:16.242225    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:16.283778    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:16.283778    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:16.378623    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:16.378623    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:16.378623    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:16.408736    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:16.409256    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:18.719251    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:18.815541    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:18.815541    6576 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
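
At this point the retry budget for the storage-provisioner apply is exhausted, and instead of failing the whole start, the enable path degrades to the user-facing "! Enabling 'storage-provisioner' returned an error" warning (out.go:285 above) carrying the last error. An entirely illustrative Go sketch of that wrap-up, not minikube's actual flow:

package main

import (
	"errors"
	"fmt"
)

// enableAddon runs the apply callback with retries; if it never succeeds,
// it surfaces the last error as a warning rather than aborting, matching
// the terminal log entry above.
func enableAddon(name string, apply func() error, attempts int) {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			fmt.Printf("addon %q enabled\n", name)
			return
		}
	}
	fmt.Printf("! Enabling '%s' returned an error: running callbacks: [%v]\n", name, err)
}

func main() {
	enableAddon("storage-provisioner", func() error {
		return errors.New("connection refused")
	}, 3)
}
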
	I1205 08:05:18.959261    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:18.983847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:19.016048    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.016048    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:19.022913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:19.054693    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.054752    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:19.058555    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:19.087342    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.087342    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:19.090772    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:19.118199    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.118199    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:19.121567    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:19.151346    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.151346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:19.155305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:19.186521    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.186611    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:19.190219    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:19.220730    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.220730    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:19.225064    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:19.255890    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.256013    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
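Each diagnostic cycle first checks for a live apiserver process (`pgrep -x` matches the pattern exactly, `-f` matches against the full command line, `-n` keeps only the newest PID), then sweeps Docker for every control-plane container it expects, using the `k8s_<name>` prefix that dockershim-style runtimes give Kubernetes-managed containers. All eight sweeps return zero containers even with `-a`, which would list exited ones too, so the control plane was never created, not merely crashed. The sweep condensed into a loop:

    # Component names taken from the log; "none" means not even a dead container exists.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids="$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')"
      echo "${c}: ${ids:-none}"
    done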
	I1205 08:05:19.256013    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:19.256013    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:19.324476    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:19.324476    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:19.362802    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:19.362802    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:19.443537    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
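Of all the log-gathering steps, only "describe nodes" needs a working apiserver; journalctl, dmesg, and the container sweeps read local state, which is why they succeed every cycle while this one fails. When triaging a refused connection here, it helps to separate "apiserver down" from "kubeconfig pointing at the wrong endpoint":

    # The kubeconfig should name the same endpoint the errors mention.
    sudo grep 'server:' /var/lib/minikube/kubeconfig   # expect https://localhost:8443
    sudo ss -tlnp | grep 8443 || echo "endpoint is right but nothing serves it"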
	I1205 08:05:19.444546    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:19.444546    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:19.474585    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:19.474647    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:20.651307    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:20.735190    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:20.735294    6576 retry.go:31] will retry after 27.405422909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.034778    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:22.060808    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:22.093037    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.093111    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:22.097193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:22.124988    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.125036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:22.128496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:22.157896    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.157947    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:22.161826    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:22.190808    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.190839    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:22.194900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:22.227226    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.227346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:22.230966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:22.260811    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.260861    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:22.264784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:22.295222    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.295331    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:22.302135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:22.343045    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.343116    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:22.343116    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:22.343116    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:22.394026    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:22.394026    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:22.457078    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:22.457078    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:22.498385    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:22.498434    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:22.581112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:22.581112    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:22.581112    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
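The Docker gathering step passes two `-u` flags so journalctl interleaves the engine's log with cri-docker's (the CRI shim in front of dockerd); either unit alone can miss the half of a container-start failure that the other daemon reported. Run manually, with the pager disabled for capture:

    sudo journalctl -u docker -u cri-docker -n 400 --no-pager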
	I1205 08:05:22.780060    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:05:22.859804    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.859804    6576 retry.go:31] will retry after 21.036491608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:25.113006    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:25.148820    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:25.186604    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.186604    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:25.191401    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:25.223786    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.223867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:25.227359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:25.262253    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.262310    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:25.266030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:25.298397    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.298433    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:25.303771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:25.334112    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.334112    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:25.338565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:25.370125    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.370206    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:25.374513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:25.406130    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.406219    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:25.410417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:25.442663    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.442742    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:25.442742    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:25.442742    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:25.479786    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:25.479786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:25.573308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:25.573308    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:25.573308    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:25.599667    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:25.599667    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:25.650617    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:25.650617    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
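When every sweep comes back empty, the kubelet journal collected in these cycles is the place that usually says why the static control-plane pods were never started (bad manifests, cgroup or swap problems, failed image pulls). A focused read of the same 400 lines the cycle gathers:

    # Narrow the kubelet journal to the lines that explain a missing apiserver.
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -iE 'apiserver|static pod|fail|error' | tail -n 40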
	I1205 08:05:28.218354    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:28.243705    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:28.279022    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.279022    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:28.283525    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:28.313798    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.313798    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:28.318172    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:28.347700    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.347700    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:28.351701    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:28.381257    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.381341    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:28.384917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:28.416041    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.416041    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:28.419541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:28.447349    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.447349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:28.451684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:28.479275    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.479307    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:28.483095    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:28.511115    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.511187    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:28.511187    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:28.511237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.574706    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:28.574706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:28.615541    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:28.615541    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:28.709604    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:28.709604    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:28.709604    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:28.738815    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:28.738815    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.300476    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:31.328202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:31.357921    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.357958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:31.361905    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:31.390844    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.390926    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:31.395488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:31.426488    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.426570    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:31.430048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:31.461632    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.461687    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:31.465105    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:31.492594    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.492657    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:31.496042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:31.523806    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.523834    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:31.527758    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:31.557959    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.558020    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:31.561776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:31.588451    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.588485    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:31.588513    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:31.588535    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:31.675984    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:31.675984    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:31.675984    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:31.706483    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:31.706567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.753154    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:31.753677    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:31.813379    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:31.813379    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
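The dmesg step keeps kernel noise out of the capture: -P disables the pager that -H (human-readable timestamps) would otherwise start, -L=never strips color escapes, and --level restricts output to warning severity and above before tail bounds it to 400 lines. The same command with long options:

    sudo dmesg --nopager --human --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400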
	I1205 08:05:34.359731    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:34.386737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:34.416273    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.416306    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:34.419220    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:34.452145    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.452661    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:34.456139    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:34.486541    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.486593    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:34.489738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:34.520642    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.520642    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:34.524007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:34.556848    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.556848    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:34.560551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:34.589976    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.589976    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:34.594061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:34.623871    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.623871    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:34.627661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:34.655428    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.655428    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:34.655428    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:34.655428    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.693248    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:34.693248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:34.782095    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:34.782095    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:34.782095    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:34.809243    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:34.809243    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:34.859486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:34.859486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.427533    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:37.454695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:37.485702    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.485702    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:37.489329    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:37.522074    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.522074    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:37.525283    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:37.555534    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.555534    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:37.559473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:37.589923    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.589923    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:37.593340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:37.625230    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.625230    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:37.628764    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:37.658722    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.658722    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:37.661870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:37.693003    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.693003    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:37.696992    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:37.726216    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.726286    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:37.726286    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:37.726333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.791305    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:37.791305    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:37.829600    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:37.829600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:37.920892    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:37.920892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:37.920892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:37.947989    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:37.947989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:40.501988    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:40.527784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:40.563590    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.563590    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:40.567375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:40.598332    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.598332    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:40.602019    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:40.629289    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.629289    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:40.633378    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:40.660574    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.660630    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:40.664275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:40.691063    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.691063    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:40.694694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:40.723611    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.723667    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:40.726975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:40.755155    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.755155    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:40.759134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:40.793723    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.793723    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:40.793723    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:40.793723    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:40.831198    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:40.831198    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:40.925587    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
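Every kubectl call here, including the repeated `describe nodes` readiness probe run against `/var/lib/minikube/kubeconfig`, dies at the transport layer: nothing is listening on 8443, so the TCP dial to `[::1]:8443` is refused before any API request is sent, and kubectl prints one `memcache.go` discovery error per attempt before giving up. The symptom is reproducible without kubectl at all:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Mirrors the failing dial in the kubectl errors above: with no
        // kube-apiserver bound to the port, the dial is refused immediately
        // instead of timing out.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }

A refused connection, as opposed to a timeout, says the host itself is reachable and the port is simply closed, which matches the container probes above finding no kube-apiserver container at all.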
	I1205 08:05:40.925587    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:40.925587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:40.954081    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:40.954114    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:41.007048    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:41.007096    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
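One gather cycle collects the same five sources each time: `journalctl -u kubelet -n 400`, `journalctl -u docker -u cri-docker -n 400`, a severity-filtered `dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400` (no pager, human-readable timestamps, color off, warning level and above, capped at 400 lines), the failing `kubectl describe nodes` probe, and a container listing via the fallback chain `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`, which prefers crictl when installed and otherwise falls back to plain `docker ps -a`. The category order shuffles from cycle to cycle, consistent with iterating a Go map. A local sketch that runs the same command strings through `bash -c` (illustrative only; these need the minikube node's tooling to succeed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same gather commands the log runs via ssh_runner, executed
        // locally through bash -c. Map iteration order is randomized, which
        // is why the log's category order differs between cycles.
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", name, err)
            }
            fmt.Print(string(out))
        }
    }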
	I1205 08:05:43.582160    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:43.607539    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:43.638277    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.638277    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:43.642375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:43.675099    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.675099    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:43.678089    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:43.706803    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.706803    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:43.713114    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:43.740522    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.740522    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:43.744411    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:43.773724    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.773780    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:43.777763    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:43.803962    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.803962    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:43.807698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:43.839559    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.839559    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:43.843918    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:43.876174    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.876252    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:43.876252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:43.876252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:43.902671    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:05:43.934973    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:43.934973    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 08:05:43.999146    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:43.999146    6576 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
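The storageclass apply is wrapped in a retry callback: the first failure is logged as `apply failed, will retry`, and only once the retries are exhausted does the user-facing `! Enabling 'default-storageclass' returned an error` warning surface (written once through out.go and once to the console, hence the duplicated block above). A simplified wrapper in the same spirit; the attempt count and interval below are placeholders, not minikube's actual values:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry loosely models the "apply failed, will retry" behaviour:
    // retry kubectl apply a few times before surfacing the error.
    func applyWithRetry(manifest string, attempts int, wait time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
                "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("%v\nstderr:\n%s", err, out)
            fmt.Println("apply failed, will retry:", lastErr)
            time.Sleep(wait) // placeholder backoff, not minikube's real schedule
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3, 2*time.Second); err != nil {
            fmt.Println("! Enabling 'default-storageclass' returned an error:", err)
        }
    }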
	I1205 08:05:44.032735    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:44.033740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:44.075384    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:44.075384    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:44.157223    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:44.157223    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:44.157223    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:46.691333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:46.717072    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:46.748595    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.748595    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:46.752218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:46.780374    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.780374    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:46.783922    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:46.815066    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.815066    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:46.818942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:46.847510    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.847563    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:46.851012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:46.883362    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.883465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:46.886941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:46.916379    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.916451    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:46.920641    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:46.949114    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.949114    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:46.953549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:46.983164    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.983164    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:46.983164    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:46.983164    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:47.022255    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:47.022255    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:47.111784    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:47.111860    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:47.111860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:47.138559    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:47.138559    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:47.188823    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:47.189346    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:48.147422    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:48.239875    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:48.239875    6576 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
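All ten dashboard manifests fail with byte-identical errors because `kubectl apply` validates each file against the server's OpenAPI schema, fetched from `GET /openapi/v2`, before sending anything; with no apiserver, every per-file validation fails the same way, and kubectl's suggested escape hatch of `--validate=false` would only defer the failure to the actual write. Assembling the equivalent multi-`-f` invocation (file list copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        files := []string{
            "dashboard-ns.yaml", "dashboard-clusterrole.yaml",
            "dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
            "dashboard-dp.yaml", "dashboard-role.yaml",
            "dashboard-rolebinding.yaml", "dashboard-sa.yaml",
            "dashboard-secret.yaml", "dashboard-svc.yaml",
        }
        // One kubectl apply with a repeated -f flag per manifest, as in the log.
        args := []string{"apply", "--force"}
        for _, f := range files {
            args = append(args, "-f", "/etc/kubernetes/addons/"+f)
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }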
	I1205 08:05:48.242898    6576 out.go:179] * Enabled addons: 
	I1205 08:05:48.245836    6576 addons.go:530] duration metric: took 1m45.1017438s for enable addons: enabled=[]
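The summary pair above records the net result: `Enabled addons:` is empty and the duration metric reports 1m45s spent for `enabled=[]`, i.e. the entire addon phase was retries against a dead apiserver. The metric itself is ordinary start/stop timing, roughly:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        enabled := []string{} // both addon callbacks failed, so nothing was enabled
        // ... enable-addons work (and its retries) would run here ...
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }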
	I1205 08:05:49.757493    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:49.785573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:49.818757    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.818757    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:49.822359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:49.849919    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.849919    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:49.853892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:49.881451    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.881451    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:49.884508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:49.916549    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.916599    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:49.922025    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:49.955857    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.955857    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:49.959871    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:49.992747    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.992747    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:49.997745    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:50.027985    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.027985    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:50.032696    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:50.066315    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.066315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:50.066315    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:50.066315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:50.162764    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:50.162764    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:50.162764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:50.190807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:50.190807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:50.244357    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:50.244357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:50.306832    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:50.306832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:52.850828    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:52.881404    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:52.914164    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.914164    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:52.919056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:52.946339    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.946339    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:52.950249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:52.977159    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.977159    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:52.981587    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:53.011126    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.011126    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:53.016170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:53.050900    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.050900    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:53.055929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:53.086492    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.086492    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:53.091422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:53.123587    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.123587    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:53.126586    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:53.155525    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.155525    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:53.155525    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:53.155525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:53.220198    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:53.221197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:53.261683    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:53.261683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:53.355432    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:53.355432    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:53.355432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:53.386521    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:53.386521    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:55.947613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:55.973795    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:56.007916    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.007916    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:56.011792    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:56.045094    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.045094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:56.048513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:56.082501    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.082501    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:56.086603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:56.116918    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.117005    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:56.120916    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:56.150716    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.150716    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:56.154101    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:56.186882    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.186882    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:56.190500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:56.223741    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.223741    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:56.227290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:56.255902    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.255902    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:56.255902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:56.255902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:56.285180    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:56.285180    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:56.333650    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:56.333650    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:56.393332    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:56.393332    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:56.432841    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:56.432841    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:56.521419    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:59.025923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:59.056473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:59.091893    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.091909    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:59.095650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:59.128079    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.128185    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:59.131611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:59.159655    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.159655    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:59.163348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:59.192422    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.192422    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:59.196339    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:59.226737    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.226737    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:59.230776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:59.258194    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.258194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:59.261784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:59.292592    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.292592    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:59.296370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:59.323764    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.323764    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:59.323764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:59.323764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:59.375689    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:59.376207    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:59.440586    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:59.440586    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:59.479856    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:59.479856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:59.578161    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:59.578161    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:59.578161    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
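The timestamps (08:05:40, :43, :46, :49, :52, :55, :59, 08:06:02, :05, ...) show the whole probe-and-gather sequence repeating on a roughly three-second cadence. Each iteration is gated on `sudo pgrep -xnf kube-apiserver.*minikube.*` (`-f` match against the full command line, `-x` require an exact pattern match, `-n` take the newest match): pgrep exits 0 once an apiserver process appears and 1 while none exists. A stripped-down version of that wait loop; the deadline below is a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
    // pgrep exits 0 when a matching process exists and 1 when none does.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // placeholder deadline
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            // ... container probes and log gathering happen here in the log ...
            time.Sleep(3 * time.Second) // cadence read off the timestamps above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }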
	I1205 08:06:02.111153    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:02.137611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:02.172231    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.172231    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:02.176271    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:02.208274    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.208274    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:02.211990    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:02.244184    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.244245    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:02.247661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:02.278388    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.278388    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:02.282228    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:02.312290    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.312290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:02.316470    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:02.345487    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.345487    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:02.349444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:02.378305    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.378305    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:02.381923    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:02.409737    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.409737    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:02.409737    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:02.409737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:02.477029    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:02.477029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:02.517422    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:02.517422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:02.605249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:02.605249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:02.605249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.632767    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:02.632828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
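The block above is minikube's apiserver wait loop: ssh_runner first probes for a running kube-apiserver process, then checks each control-plane component's container by its k8s_ name prefix. A minimal sketch of the same probe, built only from the commands shown in this log (run inside the node; the "minikube" profile name comes from the pgrep pattern):

    # reproduce the per-cycle probe from the log above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'
    done

All eight filters come back empty in every cycle here, which is why each iteration logs "0 containers" followed by a "No container was found matching" warning.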
	I1205 08:06:05.196182    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:05.221488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:05.251281    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.251355    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:05.254854    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:05.284103    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.284103    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:05.288076    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:05.315552    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.315552    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:05.319409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:05.347664    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.347664    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:05.351387    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:05.382685    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.382685    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:05.386801    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:05.416816    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.416816    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:05.421471    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:05.451265    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.451350    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:05.455129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:05.486455    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.486455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:05.486455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:05.486455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:05.548252    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:05.548252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:05.586103    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:05.586103    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:05.689902    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:05.689902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:05.689902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:05.715463    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:05.715463    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
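Every "describe nodes" attempt fails the same way: kubectl dials https://localhost:8443 and gets connection refused, consistent with no kube-apiserver container existing at all. A hedged way to confirm the endpoint is dead from inside the node (curl being present in the minikube image is an assumption, not something this log confirms):

    # assumed sketch: times out quickly if nothing is listening on :8443
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable on :8443"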
	I1205 08:06:08.298546    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:08.325694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:08.358357    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.358427    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:08.362535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:08.393631    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.393631    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:08.397365    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:08.429162    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.429162    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:08.433444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:08.464672    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.464672    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:08.467810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:08.496450    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.496450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:08.499640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:08.526246    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.526246    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:08.530507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:08.558130    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.558130    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:08.561856    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:08.590753    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.590753    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:08.590753    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:08.590753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:08.656049    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:08.656049    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:08.697268    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:08.697268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:08.794510    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:08.794510    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:08.794510    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:08.839662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:08.839734    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:11.394677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:11.423727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:11.453346    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.453346    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:11.460955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:11.498834    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.498834    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:11.498834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:11.532657    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.532657    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:11.540987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:11.575759    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.575786    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:11.579561    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:11.612047    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.612102    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:11.615579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:11.644318    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.644370    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:11.648326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:11.678026    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.678026    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:11.681899    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:11.711631    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.711631    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:11.711631    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:11.711631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:11.772905    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:11.772905    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:11.814639    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:11.814639    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:11.905607    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:11.905657    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:11.905700    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:11.934717    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:11.935238    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:14.488836    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:14.512857    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:14.546571    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.546571    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:14.549903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:14.580887    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.580887    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:14.584967    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:14.630312    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.630312    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:14.633809    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:14.667373    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.667373    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:14.671026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:14.699813    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.699813    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:14.703177    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:14.734619    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.734619    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:14.739056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:14.769129    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.769129    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:14.773030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:14.803689    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.803689    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:14.803689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:14.803689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:14.841923    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:14.841923    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:14.932570    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:14.932570    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:14.932570    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:14.961067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:14.961591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:15.010912    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:15.010953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:17.575458    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:17.603741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:17.636367    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.636367    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:17.640529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:17.668380    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.668380    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:17.672111    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:17.700544    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.700544    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:17.704634    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:17.736823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.736823    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:17.741002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:17.770125    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.770125    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:17.775816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:17.812823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.812823    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:17.815683    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:17.844895    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.844895    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:17.849115    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:17.880706    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.880706    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:17.880706    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:17.880706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:17.969171    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:17.969171    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:17.969263    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:17.995396    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:17.995396    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:18.044466    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:18.044466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:18.105721    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:18.105721    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:20.651671    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:20.679273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:20.707727    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.707727    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:20.711373    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:20.741891    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.741891    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:20.746073    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:20.777260    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.777260    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:20.780520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:20.816982    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.816982    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:20.820520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:20.850461    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.850461    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:20.854205    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:20.882429    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.882429    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:20.886920    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:20.914179    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.914179    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:20.917831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:20.949708    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.949708    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:20.949708    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:20.949708    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:21.013967    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:21.013967    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:21.053946    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:21.053946    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:21.140482    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:21.141002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:21.141002    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:21.170239    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:21.170239    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:23.729627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:23.758686    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:23.791537    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.791594    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:23.796131    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:23.827894    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.827894    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:23.832419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:23.862718    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.862718    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:23.867837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:23.896272    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.896272    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:23.900193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:23.929016    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.929078    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:23.932778    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:23.962372    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.962447    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:23.966147    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:23.998472    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.998472    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:24.004351    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:24.033564    6576 logs.go:282] 0 containers: []
	W1205 08:06:24.033564    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:24.033564    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:24.033564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:24.099505    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:24.099505    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:24.139900    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:24.139900    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:24.233474    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:24.233474    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:24.233474    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:24.263408    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:24.263408    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:26.816321    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:26.841457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:26.872936    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.872992    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:26.876345    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:26.908512    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.908580    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:26.912736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:26.944068    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.944068    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:26.947603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:26.975323    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.975360    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:26.978941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:27.008708    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.008751    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:27.012371    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:27.044160    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.044225    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:27.047780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:27.078172    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.078172    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:27.081803    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:27.111287    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.111370    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:27.111370    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:27.111435    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:27.161265    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:27.161329    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:27.221473    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:27.221473    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:27.263907    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:27.263907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:27.357876    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:27.357876    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:27.357876    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:29.890252    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:29.916690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:29.946274    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.946274    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:29.950679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:29.979149    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.979149    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:29.982229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:30.010085    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.010085    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:30.014016    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:30.043254    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.043254    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:30.048048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:30.080613    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.080613    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:30.084300    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:30.114627    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.114627    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:30.118584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:30.147947    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.148009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:30.151166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:30.180743    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.180828    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:30.180828    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:30.180828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:30.244646    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:30.244646    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:30.286079    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:30.286079    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:30.376557    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:30.376557    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:30.376557    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:30.405737    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:30.405737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:32.958550    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:32.987728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:33.018308    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.018370    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:33.022062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:33.052435    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.052435    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:33.056434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:33.085355    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.085426    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:33.089343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:33.121676    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.121737    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:33.125504    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:33.157765    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.157765    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:33.161892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:33.191061    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.191061    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:33.194930    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:33.223173    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.223173    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:33.226650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:33.257481    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.257481    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:33.257481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:33.257481    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:33.301467    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:33.301467    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:33.389528    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:33.389528    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:33.389528    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:33.418631    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:33.418631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:33.465106    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:33.465185    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.034296    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:36.063459    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:36.095210    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.095210    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:36.098565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:36.127708    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.127786    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:36.131615    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:36.159964    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.159964    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:36.163771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:36.192604    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.192604    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:36.196679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:36.224877    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.224958    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:36.228553    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:36.258280    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.258280    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:36.261911    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:36.294140    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.294140    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:36.298273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:36.329657    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.329657    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:36.329657    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:36.329657    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:36.387784    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:36.387784    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.452385    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:36.452385    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:36.493394    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:36.493394    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:36.591485    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:36.591485    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:36.591567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.124474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:39.152578    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:39.183392    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.183392    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:39.187028    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:39.216193    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.216193    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:39.219743    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:39.251680    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.251759    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:39.255869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:39.283843    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.283843    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:39.287237    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:39.316021    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.316021    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:39.319015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:39.349194    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.349194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:39.352951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:39.403729    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.403729    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:39.411012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:39.442909    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.442909    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:39.442909    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:39.442909    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:39.509174    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:39.509174    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:39.550483    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:39.550483    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:39.650354    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:39.650354    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:39.650354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.676786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:39.676786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.228069    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:42.258786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:42.290791    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.290791    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:42.294739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:42.326094    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.326094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:42.329725    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:42.356052    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.356052    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:42.359752    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:42.390464    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.390464    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:42.393935    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:42.421882    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.421882    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:42.426609    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:42.457036    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.457036    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:42.460988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:42.486064    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.486064    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:42.491250    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:42.521748    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.521748    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:42.521748    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:42.521748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:42.551195    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:42.552197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.613626    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:42.613683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:42.678856    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:42.679856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:42.719297    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:42.719297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:42.811034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:45.316640    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:45.343574    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:45.372899    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.372899    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:45.376229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:45.408264    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.408264    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:45.412119    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:45.440697    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.440697    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:45.444501    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:45.471692    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.471727    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:45.475496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:45.508400    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.508450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:45.512541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:45.544177    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.544233    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:45.548858    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:45.579165    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.579165    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:45.582164    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:45.623052    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.623052    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:45.623052    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:45.623052    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:45.651554    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:45.651554    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:45.701716    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:45.701768    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:45.766248    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:45.766248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:45.806341    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:45.806341    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:45.895675    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
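
Editor's note: each cycle above walks a fixed list of control-plane components and asks the Docker runtime for container IDs matching the kubelet's `k8s_<name>` naming convention; zero matches for every component confirms the static pods were never created. A condensed sketch of the same scan, using the exact filter pattern logged above (component list and command copied from the log):

    # Hedged sketch of the per-component scan minikube logs above.
    # Run inside the minikube node.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_$c --format='{{.ID}}')
      [ -z "$ids" ] && echo "no container matching $c"
    done
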
	I1205 08:06:48.401571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:48.432481    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:48.466418    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.466418    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:48.471424    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:48.503617    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.503617    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:48.507677    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:48.541480    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.541480    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:48.547529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:48.579177    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.579177    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:48.585087    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:48.626465    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.626465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:48.630533    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:48.660304    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.660304    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:48.663999    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:48.694957    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.694957    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:48.699665    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:48.725908    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.725908    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:48.725908    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:48.725908    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:48.817395    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:48.817466    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:48.817466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:48.848226    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:48.848739    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:48.900060    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:48.900060    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:48.962797    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:48.962797    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
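
Editor's note: between container scans, minikube gathers the same four log sources on every cycle: the kubelet and Docker/cri-docker journals, container status (crictl with a docker fallback), and kernel warnings. The one-off equivalent, with the commands copied verbatim from the log lines above (a sketch for manual triage inside the node, not a minikube interface):

    # Hedged sketch: collect the same log bundle minikube gathers above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
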
	I1205 08:06:51.508647    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:51.536278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:51.573226    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.573323    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:51.578061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:51.614603    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.614603    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:51.619576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:51.647095    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.647095    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:51.652535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:51.680320    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.680369    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:51.684269    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:51.717798    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.717827    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:51.721877    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:51.750482    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.750482    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:51.754602    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:51.786216    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.786216    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:51.790834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:51.819030    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.819030    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:51.819030    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:51.819030    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:51.876069    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:51.876110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:51.938469    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:51.938469    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.980953    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:51.980953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:52.079938    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:52.079938    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:52.079938    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:54.616891    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:54.642146    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:54.675691    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.675691    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:54.679440    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:54.709522    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.709522    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:54.713343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:54.744053    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.744112    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:54.748148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:54.782163    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.782232    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:54.786128    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:54.817067    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.817067    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:54.820867    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:54.850003    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.850003    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:54.854439    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:54.882517    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.882566    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:54.886475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:54.917057    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.917057    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:54.917057    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:54.917057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:54.982333    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:54.982333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:55.023534    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:55.023534    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:55.136747    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:55.136823    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:55.136823    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:55.169237    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:55.169237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:57.723958    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:57.750382    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:57.784932    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.784932    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:57.788837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:57.815350    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.815350    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:57.819773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:57.850513    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.850513    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:57.854585    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:57.885405    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.885405    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:57.889340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:57.917143    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.917143    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:57.921061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:57.947843    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.947843    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:57.951577    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:57.983169    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.983169    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:57.986925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:58.016381    6576 logs.go:282] 0 containers: []
	W1205 08:06:58.016381    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:58.016381    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:58.016381    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:58.081766    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:58.081766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:58.122021    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:58.122021    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:58.216654    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:58.216654    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:58.216654    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:58.245369    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:58.245369    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:00.814255    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:00.841335    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:00.870336    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.870336    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:00.874294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:00.905321    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.905321    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:00.908814    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:00.940896    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.940896    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:00.944651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:00.975783    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.975855    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:00.979485    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:01.007166    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.007166    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:01.011052    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:01.038708    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.038708    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:01.043766    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:01.072944    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.072944    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:01.076562    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:01.104574    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.104623    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:01.104665    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:01.104665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:01.169748    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:01.169748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:01.210259    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:01.210259    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:01.310310    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:01.310310    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:01.310310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:01.336589    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:01.336589    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:03.889510    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:03.919078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:03.953291    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.953291    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:03.956276    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:03.986975    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.986975    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:03.991157    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:04.022935    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.022935    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:04.026117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:04.058273    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.058312    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:04.061868    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:04.093136    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.093136    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:04.096666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:04.122322    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.122349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:04.126167    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:04.158513    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.158545    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:04.161969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:04.190492    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.190569    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:04.190569    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:04.190569    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:04.259062    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:04.259062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:04.299558    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:04.299558    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:04.393556    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:04.393644    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:04.393644    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:04.420122    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:04.420122    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:06.976110    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:07.001980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:07.033975    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.033975    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:07.040090    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:07.069823    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.069823    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:07.074015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:07.103072    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.103072    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:07.107448    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:07.138770    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.138770    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:07.142987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:07.174660    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.174660    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:07.178913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:07.209719    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.209719    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:07.215472    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:07.243539    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.243539    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:07.248737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:07.279448    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.279448    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:07.279448    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:07.279448    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:07.345481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:07.346489    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:07.384275    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:07.384275    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:07.479588    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:07.479588    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:07.479588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:07.506786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:07.506786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:10.078099    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:10.103951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:10.139034    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.139034    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:10.142691    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:10.174629    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.174629    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:10.178323    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:10.206817    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.206817    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:10.210968    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:10.239729    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.239820    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:10.245043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:10.277712    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.277712    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:10.283741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:10.315362    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.315362    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:10.318268    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:10.346693    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.346693    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:10.350670    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:10.379081    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.379081    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:10.379081    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:10.379081    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:10.443299    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:10.443299    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:10.482497    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:10.482497    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:10.567024    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:10.567024    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:10.567024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:10.596635    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:10.596635    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:13.157670    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:13.186965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:13.222698    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.222730    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:13.226690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:13.261914    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.261957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:13.265780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:13.294590    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.294590    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:13.299066    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:13.329216    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.329216    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:13.334474    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:13.366263    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.366290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:13.369870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:13.398379    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.398379    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:13.402396    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:13.430465    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.430465    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:13.434253    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:13.462873    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.462905    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:13.462905    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:13.462949    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:13.525954    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:13.526955    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:13.566284    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:13.567284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:13.656971    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:13.656971    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:13.656971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:13.684284    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:13.684284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.241440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:16.268513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:16.302653    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.302653    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:16.306429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:16.337387    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.337387    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:16.342004    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:16.371449    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.371449    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:16.376376    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:16.406912    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.406912    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:16.410777    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:16.438875    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.438875    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:16.442983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:16.470299    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.470299    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:16.474336    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:16.504067    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.504067    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:16.508174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:16.536869    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.536869    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:16.536869    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:16.536869    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:16.624673    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:16.624703    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:16.624755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:16.653894    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:16.653894    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.701985    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:16.701985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:16.763148    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:16.763148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.307232    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:19.334513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:19.371034    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.371140    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:19.375038    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:19.403110    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.403186    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:19.407168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:19.435904    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.435904    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:19.440294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:19.470700    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.470700    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:19.474611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:19.502846    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.502915    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:19.506400    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:19.540483    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.540483    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:19.544695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:19.576470    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.576501    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:19.579834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:19.609587    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.609587    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:19.609587    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:19.609587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.653000    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:19.653000    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:19.747787    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:19.747787    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:19.747787    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:19.774804    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:19.774804    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:19.825222    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:19.825338    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.394074    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:22.419163    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:22.454202    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.454202    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:22.457716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:22.487462    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.487615    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:22.491427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:22.522398    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.522398    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:22.526148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:22.554536    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.554536    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:22.558447    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:22.590329    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.590401    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:22.595088    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:22.626553    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.626553    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:22.630372    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:22.658911    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.658911    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:22.662715    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:22.692369    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.692444    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:22.692468    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:22.692468    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.759391    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:22.759391    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:22.801415    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:22.801415    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:22.891643    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:22.881338   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.883456   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.887030   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.888265   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.889355   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:22.891710    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:22.891738    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:22.922662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:22.922662    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:25.480645    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:25.506403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:25.536534    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.536600    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:25.540233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:25.568373    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.568373    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:25.572581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:25.604196    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.604196    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:25.608476    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:25.639923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.640007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:25.643813    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:25.673923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.673923    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:25.677542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:25.709156    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.709156    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:25.712910    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:25.744371    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.744371    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:25.750463    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:25.778113    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.778113    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:25.778113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:25.778113    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:25.842953    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:25.842953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:25.881310    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:25.881310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:25.976920    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:25.964944   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.966342   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.968369   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.969905   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.970655   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:25.976920    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:25.976920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:26.005828    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:26.005889    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:28.568522    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:28.594981    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:28.628025    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.628025    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:28.631569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:28.661047    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.661047    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:28.664662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:28.692667    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.692667    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:28.696624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:28.725878    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.725944    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:28.730056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:28.758073    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.758129    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:28.761794    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:28.788812    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.788812    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:28.793030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:28.839778    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.839778    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:28.843937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:28.873288    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.873288    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:28.873288    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:28.873288    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:28.937414    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:28.937414    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:28.975610    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:28.975610    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:29.110286    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:29.068093   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.099868   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.101288   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.103705   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.105454   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:29.110286    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:29.110286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:29.140120    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:29.140120    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:31.695315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:31.723717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:31.755093    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.755155    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:31.758672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:31.786260    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.786260    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:31.790917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:31.817450    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.817450    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:31.822438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:31.852769    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.852788    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:31.856218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:31.885715    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.885715    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:31.890036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:31.919240    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.919240    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:31.924888    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:31.956860    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.956860    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:31.960848    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:31.989055    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.989055    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:31.989055    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:31.989055    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:32.055751    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:32.055751    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:32.091848    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:32.091848    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:32.183494    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:32.172400   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.173483   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.174469   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.175868   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.177099   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:32.183494    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:32.183494    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:32.211020    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:32.211056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
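The "container status" step above uses a small shell fallback: prefer crictl when it is installed, otherwise fall back to plain docker. Rewritten with $(...) instead of backticks, the idiom is:

    # `which crictl` prints the full path when crictl is installed; the `echo crictl`
    # fallback keeps the command line non-empty, so when crictl is absent the sudo
    # call fails and the `|| sudo docker ps -a` branch runs instead.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a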
	I1205 08:07:34.770702    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:34.796134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:34.830020    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.830052    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:34.833506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:34.860829    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.860829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:34.864718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:34.895302    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.895302    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:34.899305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:34.928933    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.928933    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:34.935599    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:34.964256    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.964280    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:34.967945    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:34.995571    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.995571    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:35.001155    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:35.038603    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.038603    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:35.042249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:35.075025    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.075025    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:35.075025    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:35.075025    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:35.136020    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:35.136020    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:35.198233    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:35.198233    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:35.236713    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:35.236713    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:35.327635    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:35.315598   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.316759   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.320319   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.322127   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.323353   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:35.327659    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:35.327659    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
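The kubelet and Docker slices are read straight from journald and can be pulled by hand with the same flags: -u selects a unit (and may be repeated), -n caps output at the most recent lines, matching the collector's 400-line window:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager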
	I1205 08:07:37.859618    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:37.890074    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:37.922724    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.922724    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:37.926571    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:37.959720    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.959720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:37.963770    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:37.991602    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.991602    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:37.995673    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:38.023771    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.023771    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:38.030170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:38.061676    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.061676    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:38.065660    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:38.116492    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.116542    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:38.122475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:38.151483    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.151483    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:38.155624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:38.184512    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.184512    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:38.184512    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:38.184512    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:38.221972    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:38.221972    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:38.315283    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:38.304319   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.306082   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.307978   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.309605   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.310846   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:38.315283    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:38.315283    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:38.342209    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:38.342209    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:38.391392    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:38.391470    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
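The dmesg step keeps only kernel messages at warning severity and above. A long-form equivalent (avoiding the short -P/-H/-L flags, whose spellings vary across util-linux versions) is:

    # warn and above, most recent 400 lines
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400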
	I1205 08:07:40.955418    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:40.982062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:41.015938    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.015938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:41.019996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:41.049917    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.049917    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:41.052925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:41.084946    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.084946    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:41.088068    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:41.120218    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.120297    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:41.123688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:41.152948    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.152948    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:41.156508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:41.183795    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.183795    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:41.187681    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:41.217097    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.217097    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:41.221130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:41.252354    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.252354    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:41.252354    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:41.252354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:41.345903    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:41.332593   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.336834   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.339033   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340171   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340983   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:41.345903    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:41.345903    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:41.373149    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:41.373149    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:41.423553    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:41.423553    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:41.485144    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:41.485144    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
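Each "0 containers" / "No container was found" pair comes from filtering Docker by name: with the cri-dockerd runtime, kubelet-managed containers carry a k8s_ name prefix (roughly k8s_<container>_<pod>_<namespace>_...), so an empty listing means the component container was never created, not merely stopped. For example:

    # Empty output here corresponds to the "0 containers: []" lines above.
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Names}} {{.Status}}'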
	I1205 08:07:44.029139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:44.056384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:44.087995    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.088078    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:44.091865    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:44.118934    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.118934    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:44.122494    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:44.150822    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.150864    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:44.154454    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:44.183401    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.183401    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:44.187086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:44.214588    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.214644    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:44.217896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:44.249548    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.249548    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:44.253290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:44.281230    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.281230    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:44.284996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:44.314362    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.314426    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:44.314426    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:44.314426    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:44.378166    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:44.378166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:44.420024    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:44.420024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:44.510942    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:44.501504   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.502772   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.503633   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.506343   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.507775   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:44.510942    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:44.510942    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:44.539432    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:44.539482    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
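The failing "describe nodes" step can be replayed by hand with the kubectl binary minikube stages on the node; adding -v=6 makes the client log each HTTP round-trip, confirming whether the dial itself is what fails:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig -v=6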
	I1205 08:07:47.095962    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:47.121976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:47.155042    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.155042    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:47.159040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:47.188768    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.188768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:47.192847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:47.220500    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.220500    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:47.224299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:47.252483    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.252483    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:47.256264    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:47.285852    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.285852    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:47.290573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:47.319383    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.319450    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:47.323007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:47.353203    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.353203    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:47.357241    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:47.385498    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.385498    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:47.385498    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:47.385498    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:47.449686    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:47.449686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:47.490407    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:47.490407    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:47.577868    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:47.566167   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.567021   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.569823   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.570745   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.574800   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:47.577868    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:47.577868    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:47.604652    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:47.604652    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
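minikube bootstraps the control plane with kubeadm, so kube-apiserver should run as a kubelet static pod. If port 8443 stays closed, two checks narrow the failure (paths are the kubeadm defaults): does the manifest exist, and are the services that launch it actually running?

    ls -l /etc/kubernetes/manifests/
    sudo systemctl is-active kubelet docker cri-docker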
	I1205 08:07:50.157279    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:50.184328    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:50.218852    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.218852    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:50.222438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:50.250551    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.250571    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:50.254169    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:50.285371    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.285424    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:50.289741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:50.320093    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.320093    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:50.323845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:50.357038    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.357084    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:50.360291    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:50.389753    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.389829    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:50.392859    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:50.423710    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.423710    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:50.427343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:50.454456    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.454456    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:50.454456    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:50.454456    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:50.516581    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:50.516581    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:50.555412    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:50.555412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:50.648402    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:50.638282   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.639233   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.641786   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.642733   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.645724   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:50.648402    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:50.648402    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:50.673701    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:50.673701    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
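The timestamps show the collector re-polling on roughly a three-second cadence. A minimal sketch of such a wait loop (illustrative only, not minikube's actual code), using the same pgrep flags seen above (-x exact match, -n newest, -f match against the full command line):

    # poll every 3s, give up after ~2 minutes instead of spinning forever
    for i in $(seq 1 40); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done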
	I1205 08:07:53.230542    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:53.256707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:53.290781    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.290781    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:53.294254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:53.326261    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.326261    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:53.329838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:53.359630    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.359630    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:53.364896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:53.396046    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.396046    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:53.400120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:53.428713    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.428713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:53.432409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:53.462479    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.462479    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:53.467583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:53.495306    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.495306    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:53.499565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:53.530622    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.530622    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:53.530622    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:53.530622    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:53.593183    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:53.593183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:53.633807    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:53.633807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:53.721016    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:53.712922   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.714157   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.715494   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.716874   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.718161   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:53.721016    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:53.721016    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:53.748333    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:53.748442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
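Because every component query returns empty rather than erroring, the runtime itself is probably up; a quick sanity check separates "docker is broken" from "containers were never created":

    # If dockerd were down this would fail outright instead of printing counts.
    sudo docker info --format '{{.ServerVersion}}: {{.ContainersRunning}} running, {{.ContainersStopped}} stopped'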
	I1205 08:07:56.315862    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:56.341452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:56.374032    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.374063    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:56.377843    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:56.408635    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.408698    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:56.412330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:56.442083    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.442083    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:56.445380    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:56.473679    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.473749    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:56.477263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:56.506107    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.506156    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:56.510975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:56.538958    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.539022    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:56.542581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:56.572303    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.572303    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:56.576375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:56.604073    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.604073    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:56.604073    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:56.604145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:56.641552    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:56.641552    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:56.734944    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:56.721878   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.722727   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.725718   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.727423   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.728368   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:56.735002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:56.735046    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:56.770367    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:56.770412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.826378    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:56.826378    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
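The slices gathered in each cycle (kubelet, dmesg, describe nodes, Docker, container status) are the same ones minikube logs bundles in one shot; if the flag is available in the build under test, writing them to a file is convenient for attaching to a bug report:

    minikube logs --file=logs.txt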
	I1205 08:07:59.393300    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:59.417617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:59.452220    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.452220    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:59.456092    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:59.484787    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.484787    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:59.488348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:59.516670    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.516670    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:59.521214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:59.548048    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.548048    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:59.551862    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:59.576869    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.576869    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:59.581825    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:59.610579    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.610579    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:59.614523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:59.642507    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.642507    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:59.646397    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:59.675062    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.675062    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:59.675062    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:59.675062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.739704    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:59.739704    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:59.782363    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:59.782363    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:59.876076    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:59.865923   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.867089   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.868088   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.870067   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.871213   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:59.876076    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:59.876076    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:59.903005    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:59.903005    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
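The cycle above is minikube polling for the control plane: it looks for a running kube-apiserver process, then checks for each expected component container by name, and every check returns 0 containers. A minimal standalone sketch of the same probe, assuming a shell inside the minikube node (e.g. via "minikube ssh"); the pgrep pattern, the k8s_ name prefix, and the docker filter are taken verbatim from the log lines above:

    # Look for a running apiserver process, as in the log's pgrep step.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Probe each expected control-plane container by its k8s_ name prefix.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}')
      # An empty result corresponds to the log's "No container was found
      # matching" warnings.
      [ -n "$ids" ] && echo "$c: $ids" || echo "no container matching $c"
    done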
	I1205 08:08:02.456978    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:02.483895    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:02.516374    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.516374    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:02.520443    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:02.553066    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.553148    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:02.556844    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:02.585220    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.585220    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:02.589183    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:02.620655    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.620655    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:02.625389    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:02.659292    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.659369    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:02.662727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:02.690972    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.690972    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:02.694944    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:02.723751    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.723797    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:02.727357    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:02.764750    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.764750    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:02.764750    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:02.764750    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:02.834733    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:02.834733    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:02.873432    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:02.873432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:02.963503    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:02.952119   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.955623   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.956877   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.957681   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.960011   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:02.963503    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:02.963503    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:02.992067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:02.992067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:05.547340    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:05.572946    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:05.605473    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.605473    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:05.609479    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:05.639072    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.639072    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:05.642702    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:05.674126    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.674174    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:05.678318    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:05.710378    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.710378    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:05.713988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:05.743263    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.743263    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:05.748802    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:05.777467    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.777467    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:05.781993    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:05.816147    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.816147    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:05.820044    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:05.849173    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.849173    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:05.849173    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:05.849173    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:05.937771    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:05.926656   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.928398   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.929479   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.932790   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.933608   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:05.937771    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:05.937771    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:05.965110    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:05.965110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:06.012927    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:06.012927    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:06.076287    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:06.076287    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:08.621402    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:08.647297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:08.678598    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.678679    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:08.681866    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:08.710779    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.710856    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:08.714554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:08.745379    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.745379    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:08.750135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:08.785796    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.785840    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:08.791900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:08.823728    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.823778    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:08.827659    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:08.858652    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.858726    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:08.862304    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:08.893238    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.893287    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:08.896783    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:08.927578    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.927578    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:08.927578    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:08.927578    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:08.990752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:08.990752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:09.030509    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:09.030509    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:09.116112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:09.116629    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:09.116629    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:09.148307    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:09.148307    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:11.720341    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:11.750190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:11.784223    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.784247    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:11.789837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:11.819184    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.819184    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:11.824438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:11.852058    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.852058    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:11.857984    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:11.888391    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.888391    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:11.891707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:11.921973    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.921973    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:11.925426    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:11.953845    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.953845    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:11.957863    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:11.987150    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.987236    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:11.990921    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:12.018843    6576 logs.go:282] 0 containers: []
	W1205 08:08:12.018895    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:12.018895    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:12.018918    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:12.048523    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:12.048523    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:12.099490    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:12.099490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:12.163368    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:12.163368    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:12.204867    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:12.204867    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:12.290894    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
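Every "describe nodes" attempt in this section fails the same way: kubectl cannot reach the apiserver on localhost:8443, which is consistent with the container checks finding no kube-apiserver container at all. A sketch of the failing step, assuming a shell on the node; the kubectl path and kubeconfig are verbatim from the log, while the curl probe is an added assumption for illustration (/livez is assumed to be the apiserver health endpoint; it does not appear in the log):

    # Reproduce the failing "describe nodes" call from the log.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # While the apiserver is down this exits 1 with "connection refused".
    # Added probe (assumption, not from the log): check the port directly.
    curl -ks https://localhost:8443/livez || echo "nothing listening on 8443"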
	I1205 08:08:14.795945    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:14.821749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:14.851399    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.851399    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:14.855010    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:14.887370    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.887370    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:14.891117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:14.922139    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.922139    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:14.926245    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:14.954095    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.954095    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:14.959551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:14.987564    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.987564    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:14.991080    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:15.023941    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.023941    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:15.027344    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:15.056411    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.056474    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:15.059417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:15.092400    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.092400    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:15.092400    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:15.092400    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:15.119932    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:15.119932    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:15.169067    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:15.169067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:15.232603    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:15.232603    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:15.276106    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:15.276106    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:15.363421    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:17.870108    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:17.895889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:17.927528    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.927528    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:17.931166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:17.959105    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.959105    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:17.962846    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:17.994011    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.994011    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:17.998047    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:18.026606    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.026677    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:18.030234    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:18.061389    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.061389    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:18.065290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:18.096454    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.096454    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:18.100320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:18.129213    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.129213    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:18.133040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:18.160088    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.160111    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:18.160111    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:18.160111    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:18.221228    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:18.221228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:18.258886    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:18.258886    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:18.348416    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:18.348496    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:18.348525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:18.379855    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:18.379855    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:20.936239    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:20.959002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:20.990013    6576 logs.go:282] 0 containers: []
	W1205 08:08:20.990085    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:20.993773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:21.021884    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.021925    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:21.025964    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:21.054531    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.054531    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:21.058277    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:21.088997    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.089078    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:21.092631    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:21.121326    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.121360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:21.125135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:21.160429    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.160496    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:21.164226    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:21.192488    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.192557    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:21.196294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:21.228406    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.228445    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:21.228445    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:21.228495    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:21.291604    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:21.292600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:21.331218    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:21.331218    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:21.412454    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:21.412454    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:21.412454    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:21.441164    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:21.441229    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:23.994395    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:24.020275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:24.054682    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.054682    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:24.058674    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:24.089654    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.089654    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:24.093569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:24.123224    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.123224    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:24.127942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:24.155350    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.155350    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:24.159192    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:24.192652    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.192652    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:24.197194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:24.229851    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.229851    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:24.233957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:24.262158    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.262158    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:24.266478    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:24.297683    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.297766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:24.297766    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:24.297766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:24.388464    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:24.388464    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:24.388464    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:24.416764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:24.416764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:24.468678    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:24.469203    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:24.532678    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:24.532678    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
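Between probes, minikube gathers node diagnostics in varying order: the kubelet and Docker journals, kernel warnings, and a container listing. A sketch of the same collection, assuming a shell on the node; all four commands are verbatim from the log, and the backtick expression makes the container listing fall back to "docker ps -a" when crictl is not installed:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a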
	I1205 08:08:27.075175    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:27.104797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:27.137440    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.137440    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:27.141581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:27.171103    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.171126    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:27.174625    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:27.205068    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.205102    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:27.208711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:27.237765    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.237806    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:27.241719    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:27.269838    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.269838    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:27.273353    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:27.300835    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.300835    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:27.304633    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:27.333062    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.333062    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:27.338523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:27.366572    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.366572    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:27.366572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:27.366572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.402514    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:27.402514    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:27.499452    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:27.499452    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:27.499452    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:27.528089    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:27.528089    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:27.596881    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:27.596881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
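	The probe sequence above repeats every few seconds for the remainder of this log: look for a kube-apiserver process, check each expected k8s_* container by name, then gather kubelet/dmesg/Docker output and re-run kubectl against localhost:8443, which keeps refusing connections. A minimal sketch of that sequence, using the exact commands recorded in the log (the loop and the echo messages are illustrative assumptions, not minikube's actual code):

	#!/usr/bin/env bash
	# Re-run the checks this log records, inside the minikube node.
	# Commands are copied from the log lines above; the loop is illustrative.

	# 1. Is a kube-apiserver process running?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	# 2. Does any expected control-plane container exist (even stopped)?
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}')
	  [ -z "$ids" ] && echo "no container matching k8s_${name}"
	done

	# 3. The check that fails throughout this log: nothing listens on 8443.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig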
	I1205 08:08:30.168154    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:30.194986    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:30.228709    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.228709    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:30.233961    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:30.268256    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.268256    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:30.271667    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:30.300456    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.300519    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:30.303870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:30.335955    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.335955    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:30.339590    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:30.367829    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.367829    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:30.373123    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:30.401294    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.401327    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:30.404974    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:30.436526    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.436526    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:30.440246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:30.478544    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.478599    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:30.478599    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:30.478651    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.544716    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:30.544716    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:30.584496    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:30.584496    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:30.671308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:30.671352    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:30.671352    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:30.699029    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:30.699029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:33.251744    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:33.280500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:33.311912    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.311912    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:33.316407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:33.347966    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.347966    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:33.351341    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:33.386249    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.386249    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:33.389828    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:33.420571    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.420571    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:33.423584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:33.450599    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.450599    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:33.453949    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:33.488480    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.488480    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:33.492797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:33.523382    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.523382    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:33.526929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:33.561860    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.561860    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:33.561860    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:33.561860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:33.628425    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:33.628425    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:33.666453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:33.666453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:33.756872    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:33.756872    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:33.756872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:33.785780    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:33.785780    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:36.342322    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:36.368238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:36.399529    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.399529    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:36.402710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:36.430561    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.430561    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:36.434233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:36.461894    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.461894    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:36.466270    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:36.492354    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.492354    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:36.495668    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:36.526818    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.526818    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:36.530606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:36.564752    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.564752    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:36.569130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:36.598403    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.598403    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:36.603579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:36.635757    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.635757    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:36.635757    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:36.635757    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:36.702715    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:36.702715    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:36.740740    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:36.740740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:36.827779    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:36.827779    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:36.827779    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:36.855113    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:36.855148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.404078    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:39.428626    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:39.461540    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.461540    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:39.465369    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:39.497259    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.497368    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:39.501168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:39.532526    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.532526    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:39.537388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:39.570114    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.570114    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:39.574332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:39.607392    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.607392    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:39.611100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:39.640933    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.640933    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:39.644381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:39.673224    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.673224    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:39.678235    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:39.706766    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.706766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:39.706766    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:39.706766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:39.734527    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:39.734527    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.787138    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:39.787138    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:39.849637    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:39.849637    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:39.889331    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:39.889331    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:39.977390    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:42.481792    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:42.508550    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:42.541632    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.541632    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:42.545635    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:42.595829    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.595829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:42.601196    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:42.630888    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.630888    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:42.634929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:42.665451    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.665451    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:42.668581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:42.701244    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.701244    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:42.705368    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:42.737250    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.737250    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:42.740441    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:42.766622    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.766700    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:42.770278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:42.801486    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.801486    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:42.801486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:42.801486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:42.866794    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:42.866930    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:42.906819    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:42.906819    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:43.000226    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:43.000226    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:43.000226    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:43.027011    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:43.027011    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:45.586794    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:45.615024    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:45.642666    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.642666    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:45.646348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:45.675867    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.675867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:45.679650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:45.711785    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.711785    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:45.717449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:45.750065    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.750109    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:45.753406    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:45.782908    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.782908    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:45.786362    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:45.816309    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.816309    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:45.819889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:45.847629    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.847656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:45.850622    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:45.880676    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.880733    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:45.880759    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:45.880759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:45.943843    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:45.943843    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:45.984212    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:45.984212    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:46.071821    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:46.071821    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:46.071821    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:46.098280    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:46.098280    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:48.651285    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:48.676952    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:48.706696    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.706696    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:48.710427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:48.738766    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.738766    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:48.746145    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:48.773486    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.773486    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:48.778542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:48.805908    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.805908    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:48.809817    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:48.840360    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.840360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:48.843723    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:48.871560    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.871560    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:48.875316    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:48.903556    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.903556    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:48.908924    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:48.938455    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.938455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:48.938455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:48.938455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:49.001951    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:49.001951    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:49.042098    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:49.042098    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:49.131350    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:49.131350    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:49.131350    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:49.166759    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:49.166759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:51.724851    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:51.752650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:51.780528    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.780542    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:51.784422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:51.816577    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.816577    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:51.819989    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:51.849244    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.849244    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:51.853211    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:51.881159    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.881222    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:51.884831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:51.917237    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.917237    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:51.921202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:51.951018    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.951018    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:51.955222    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:51.982262    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.982262    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:51.986170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:52.013482    6576 logs.go:282] 0 containers: []
	W1205 08:08:52.013526    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:52.013564    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:52.013564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:52.050334    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:52.050334    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:52.144178    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:52.133526   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.134871   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.136142   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.137800   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.139220   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:52.144178    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:52.144178    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:52.171135    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:52.171135    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:52.223993    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:52.223993    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:54.792613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:54.817042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:54.848768    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.848768    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:54.852580    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:54.881045    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.881045    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:54.885194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:54.915368    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.915368    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:54.919753    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:54.952592    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.952679    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:54.956477    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:54.989304    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.989357    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:54.992976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:55.025855    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.025855    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:55.029407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:55.059218    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.059290    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:55.063529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:55.092992    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.092992    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:55.092992    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:55.092992    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:55.201249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:55.191114   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.192097   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.193360   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.194595   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.195561   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:55.201249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:55.201249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:55.228877    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:55.228907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:55.286872    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:55.286872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:55.357844    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:55.357844    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:57.912434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:57.938621    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:57.968927    6576 logs.go:282] 0 containers: []
	W1205 08:08:57.968927    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:57.975548    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:58.003200    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.003200    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:58.006983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:58.037886    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.037886    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:58.041594    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:58.072037    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.072037    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:58.076711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:58.118201    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.118201    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:58.122059    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:58.150468    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.150468    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:58.154554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:58.186009    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.186009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:58.189676    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:58.219204    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.219204    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
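Each probe above relies on the `k8s_<component>` container-naming convention used by kubeadm with cri-dockerd, so an empty result for every control-plane name means the kubelet never managed to start (or restart) the static pods. The same check by hand looks like:

	# List kube-apiserver containers in any state; empty output is what
	# logs.go reports as: 0 containers: []
	docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'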
	I1205 08:08:58.219204    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:58.219204    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:58.283572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:58.283572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:58.322291    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:58.322291    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:58.406023    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:58.406023    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:58.406023    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:58.434361    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:58.434881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
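The container-status collector is runtime-agnostic: the backtick subshell substitutes the crictl path when the binary exists, and otherwise the literal word `crictl` fails to execute, so the `||` branch falls through to plain docker:

	# `which crictl` succeeds -> its path is substituted; otherwise "crictl" is
	# not a command, the first pipeline fails, and docker ps -a runs instead.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a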
	I1205 08:09:00.986031    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:01.012520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:01.041860    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.041860    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:01.045736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:01.074168    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.074168    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:01.081136    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:01.115160    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.115160    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:01.121214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:01.152200    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.152200    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:01.155786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:01.187849    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.187849    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:01.193651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:01.220927    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.220927    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:01.225251    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:01.262648    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.262648    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:01.266549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:01.298388    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.298388    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:01.298459    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:01.298491    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:01.389098    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:01.389126    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:01.389126    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:01.418232    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:01.418232    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:01.463083    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:01.463083    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:01.528159    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:01.528159    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
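Both host-log sources are bounded so a wedged node cannot flood the report: the newest 400 journal entries per unit, and only warning-or-worse kernel messages:

	# kubelet service log, newest 400 entries
	sudo journalctl -u kubelet -n 400
	# kernel ring buffer: no pager (-P), human-readable timestamps (-H),
	# color disabled, warn level and above, capped at 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400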
	[... the same log-gathering cycle repeats at 08:09:04, 08:09:07, 08:09:10, 08:09:13, 08:09:16, 08:09:19, and 08:09:22 with identical results: the pgrep/docker-ps probes find 0 containers for every control-plane component, the kubelet/dmesg/Docker/container-status logs are collected, and `kubectl describe nodes` fails with the same connection refused on localhost:8443; repeated iterations elided ...]
	I1205 08:09:25.703191    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:25.728570    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:25.758884    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.758884    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:25.765071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:25.792957    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.792957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:25.796556    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:25.825466    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.825466    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:25.828728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:25.857451    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.857521    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:25.861306    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:25.887700    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.887700    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:25.891071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:25.920875    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.920875    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:25.924452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:25.952908    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.952952    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:25.956305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:25.987608    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.987608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:25.987608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:25.987608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:26.027162    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:26.027162    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:26.120245    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:26.107417   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.108200   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.112823   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.113923   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.114975   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:26.120245    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:26.120245    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:26.147670    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:26.147697    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:26.198923    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:26.198963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:28.769076    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:28.797716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:28.829859    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.829898    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:28.833257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:28.864507    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.864507    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:28.868407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:28.898827    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.898827    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:28.902971    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:28.933087    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.933087    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:28.937063    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:28.964140    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.964140    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:28.968403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:28.997620    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.997620    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:29.001779    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:29.035745    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.035745    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:29.038757    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:29.068429    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.068429    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:29.068429    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:29.068429    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:29.124688    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:29.124688    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:29.188675    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:29.188675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:29.227887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:29.227887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:29.312828    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:29.301515   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.302784   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.303557   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.306066   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.307186   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:29.312828    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:29.312828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:31.845911    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:31.878797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:31.916523    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.916523    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:31.919583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:31.950914    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.950976    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:31.954687    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:31.983555    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.983580    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:31.987603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:32.021007    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.021007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:32.025190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:32.056980    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.057033    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:32.060500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:32.104780    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.104780    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:32.108815    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:32.135429    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.135494    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:32.138969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:32.171260    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.171260    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:32.171260    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:32.171260    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:32.237752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:32.237752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:32.277887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:32.277887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:32.365810    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:32.355223   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.356563   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.358244   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.359525   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.360794   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:32.365810    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:32.365810    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:32.392252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:32.392252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:34.943627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:34.969529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:35.010672    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.010672    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:35.015462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:35.048036    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.048036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:35.055991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:35.103005    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.103005    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:35.106890    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:35.137906    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.137906    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:35.141530    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:35.172625    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.172625    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:35.176175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:35.209474    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.209474    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:35.213175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:35.244787    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.244787    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:35.248557    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:35.275127    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.275158    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:35.275158    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:35.275158    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:35.334298    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:35.334298    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:35.373969    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:35.373969    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:35.459656    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:35.459755    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:35.459755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:35.489057    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:35.489057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:38.049404    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:38.073507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:38.101267    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.101337    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:38.104951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:38.134276    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.134276    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:38.139127    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:38.166437    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.166437    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:38.170518    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:38.199145    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.199145    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:38.202760    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:38.230466    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.230466    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:38.233640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:38.263867    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.263867    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:38.267542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:38.297791    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.297791    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:38.301874    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:38.332980    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.332980    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:38.332980    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:38.332980    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:38.396086    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:38.396086    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:38.433018    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:38.433018    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:38.516847    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:38.516847    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:38.516847    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:38.545985    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:38.545985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:41.097758    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:41.125607    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:41.156423    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.156423    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:41.159823    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:41.188324    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.188383    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:41.192299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:41.224751    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.224789    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:41.228655    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:41.257790    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.257790    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:41.261606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:41.292935    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.292999    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:41.296487    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:41.322728    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.322728    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:41.326980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:41.355569    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.355569    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:41.359412    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:41.388228    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.388228    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:41.388228    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:41.388228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:41.454094    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:41.454094    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:41.492536    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:41.492536    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:41.584848    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:41.584892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:41.584892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:41.611807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:41.611807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:44.169483    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:44.196254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:44.224412    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.224412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:44.229628    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:44.257724    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.257724    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:44.262355    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:44.289872    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.289926    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:44.293506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:44.321891    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.321891    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:44.325045    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:44.354424    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.354424    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:44.357980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:44.388960    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.388960    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:44.392224    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:44.424484    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.424484    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:44.427710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:44.458834    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.458834    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:44.458834    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:44.458834    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:44.523336    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:44.523336    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:44.560362    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:44.560362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:44.656711    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:44.656711    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:44.656711    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:44.682009    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:44.683010    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.243380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:47.270606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:47.302678    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.302720    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:47.305835    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:47.334169    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.334213    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:47.338162    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:47.370622    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.370693    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:47.374238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:47.406764    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.406787    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:47.410449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:47.439290    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.439332    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:47.442816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:47.475239    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.475239    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:47.479100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:47.510196    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.510196    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:47.513831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:47.543315    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.543378    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:47.543378    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:47.543411    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:47.577600    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:47.577600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.651517    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:47.651517    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:47.717530    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:47.717530    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:47.757989    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:47.757989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:47.848615    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:50.354473    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:50.381662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:50.410303    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.410303    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:50.416210    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:50.443479    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.443479    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:50.447606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:50.475214    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.475214    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:50.479409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:50.508984    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.508984    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:50.513185    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:50.544532    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.544532    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:50.548200    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:50.578350    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.578350    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:50.583137    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:50.615656    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.615656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:50.619983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:50.649117    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.649117    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:50.649117    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:50.649117    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:50.678837    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:50.678837    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:50.730963    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:50.730963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:50.797442    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:50.797442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:50.839051    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:50.840050    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:50.934073    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:53.440116    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:53.465957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:53.497390    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.497462    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:53.501077    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:53.529488    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.529488    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:53.536331    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:53.563367    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.563367    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:53.566361    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:53.596894    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.596894    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:53.600611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:53.630623    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.630623    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:53.634434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:53.664123    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.664123    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:53.668403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:53.697948    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.697948    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:53.701419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:53.730378    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.730462    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:53.730462    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:53.730462    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:53.798465    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:53.798465    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:53.841124    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:53.841124    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:53.935344    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:53.936318    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:53.936318    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:53.965040    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:53.965040    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:56.520907    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:56.551718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:56.584506    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.584506    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:56.588065    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:56.618214    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.618214    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:56.622199    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:56.650798    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.650798    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:56.654367    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:56.685409    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.685440    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:56.688781    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:56.719049    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.719163    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:56.722810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:56.753646    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.753646    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:56.757666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:56.793942    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.793942    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:56.798049    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:56.827315    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.827315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:56.827315    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:56.827315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:56.893213    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:56.893213    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:56.931234    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:56.931234    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:57.020142    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:57.020142    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:57.020142    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:57.048871    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:57.048871    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:59.606004    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:59.632524    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:59.662177    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.662177    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:59.666311    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:59.701152    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.701202    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:59.704398    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:59.733278    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.733278    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:59.738174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:59.769038    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.769038    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:59.773266    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:59.814259    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.814259    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:59.818330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:59.848066    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.848066    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:59.851684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:59.880029    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.880029    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:59.884457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:59.914608    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.914608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:59.914608    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:59.914608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:59.978490    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:59.978490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:00.018881    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:00.018881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:00.109744    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:00.109744    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:00.109744    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:00.137522    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:00.137591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:02.693722    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:02.718495    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:10:02.754864    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.754864    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:10:02.758547    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:10:02.795133    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.795231    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:10:02.798914    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:10:02.828115    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.828115    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:10:02.831263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:10:02.864241    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.864241    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:10:02.867861    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:10:02.895555    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.895555    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:10:02.901617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:10:02.931756    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.931756    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:10:02.935718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:10:02.964034    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.964034    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:10:02.968113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:10:03.000080    6576 logs.go:282] 0 containers: []
	W1205 08:10:03.000080    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:10:03.000080    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:03.000080    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:03.092694    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:03.094183    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:03.094183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:03.124625    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:03.124625    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:03.178920    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:10:03.178920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:10:03.237776    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:10:03.237776    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:05.783793    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:05.810874    6576 out.go:203] 
	W1205 08:10:05.812874    6576 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 08:10:05.812874    6576 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 08:10:05.812874    6576 out.go:285] * Related issues:
	* Related issues:
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1205 08:10:05.815880    6576 out.go:203] 

                                                
                                                
** /stderr **
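The repeated "describe nodes" failures above and the final K8S_APISERVER_MISSING exit tell the same story: nothing ever listened on localhost:8443, so every kubectl probe died with connection refused and the 6m0s node wait expired. The checks the harness loops on can be replayed by hand against this profile; a minimal sketch, reusing the exact commands from this log via minikube ssh:

	out/minikube-windows-amd64.exe -p newest-cni-042100 ssh -- sudo pgrep -xnf kube-apiserver.*minikube.*
	out/minikube-windows-amd64.exe -p newest-cni-042100 ssh -- docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	out/minikube-windows-amd64.exe -p newest-cni-042100 ssh -- sudo journalctl -u kubelet -n 400

An empty pgrep and zero matching containers, as seen throughout the run, mean kubelet never started the control-plane static pods; the kubelet journal is where the underlying reason would surface.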
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-042100 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 105
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042100
helpers_test.go:243: (dbg) docker inspect newest-cni-042100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619",
	        "Created": "2025-12-05T07:52:58.091352749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T08:03:50.023797205Z",
	            "FinishedAt": "2025-12-05T08:03:46.631173784Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hosts",
	        "LogPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619-json.log",
	        "Name": "/newest-cni-042100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-042100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042100",
	                "Source": "/var/lib/docker/volumes/newest-cni-042100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042100",
	                "name.minikube.sigs.k8s.io": "newest-cni-042100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7425ef782ce126f539b7a23248f53aee42fe4667088eea6cf367858b569563e9",
	            "SandboxKey": "/var/run/docker/netns/7425ef782ce1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62708"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62709"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62710"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62711"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62712"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "174359b7b50b3bec7b4847d3ab43850e80d128f01a95736675cb3ceba87aab04",
	                    "EndpointID": "5e8b48011f9a64464c884645b921403d03309228e61384410733ff99b4453af2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042100",
	                        "ee0c9d80d83a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
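Per the inspect output, the restarted container itself is healthy: Status running with ExitCode 0, and all five service ports are published on 127.0.0.1, including apiserver port 8443 mapped to host port 62712. That localizes the failure inside the node rather than in Docker networking. For a spot check of just the port table, the same Go-template mechanism the harness uses for --format={{.State.Status}} works; for example:

	docker inspect -f "{{json .NetworkSettings.Ports}}" newest-cni-042100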
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (604.2333ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
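Host prints Running while the command exits non-zero because minikube status also encodes component health in its exit code; exit status 2 alongside a running host is consistent with a container that is up but whose control plane is down, hence the harness's "(may be ok)". The unfiltered per-component view comes from the same binary:

	out/minikube-windows-amd64.exe status -p newest-cni-042100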
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25: (1.7174429s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-218000 sudo systemctl status crio --all --full --no-pager             │ bridge-218000  │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │                     │
	│ ssh     │ -p kubenet-218000 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo systemctl cat crio --no-pager                             │ bridge-218000  │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status docker --all --full --no-pager          │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;   │ bridge-218000  │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat docker --no-pager                          │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo crio config                                               │ bridge-218000  │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/docker/daemon.json                              │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo docker system info                                       │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p bridge-218000                                                                │ bridge-218000  │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat cri-docker --no-pager                      │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cri-dockerd --version                                    │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status containerd --all --full --no-pager      │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat containerd --no-pager                      │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /lib/systemd/system/containerd.service               │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/containerd/config.toml                          │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo containerd config dump                                   │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status crio --all --full --no-pager            │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │                     │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat crio --no-pager                            │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo crio config                                              │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p kubenet-218000                                                               │ kubenet-218000 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	W1205 08:03:44.511207    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:46.513793    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	Log file created at: 2025/12/05 08:03:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:03:48.079593    6576 out.go:360] Setting OutFile to fd 1628 ...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.133685    6576 out.go:374] Setting ErrFile to fd 1512...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.149881    6576 out.go:368] Setting JSON to false
	I1205 08:03:48.152825    6576 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13085,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:03:48.152825    6576 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:03:48.159945    6576 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:03:48.164658    6576 notify.go:221] Checking for updates...
	I1205 08:03:48.167308    6576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:48.170547    6576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:03:48.173264    6576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:03:48.177277    6576 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:03:48.179134    6576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:03:48.182963    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:48.184223    6576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:03:48.306826    6576 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:03:48.310816    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.562528    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.540004205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.565521    6576 out.go:179] * Using the docker driver based on existing profile
	I1205 08:03:48.568528    6576 start.go:309] selected driver: docker
	I1205 08:03:48.568528    6576 start.go:927] validating driver "docker" against &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.568528    6576 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:03:48.621627    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.870676    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.852383077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.870676    6576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 08:03:48.870676    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:03:48.871676    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:03:48.871676    6576 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.874674    6576 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 08:03:48.876674    6576 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:03:48.879674    6576 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:03:48.881674    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:03:48.881674    6576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 08:03:48.924123    6576 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:48.965045    6576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:03:48.965045    6576 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 08:03:49.173795    6576 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:49.174041    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 08:03:49.174210    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 08:03:49.176070    6576 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:03:49.176070    6576 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:49.176610    6576 start.go:364] duration metric: took 539.2µs to acquireMachinesLock for "newest-cni-042100"
	I1205 08:03:49.176876    6576 start.go:96] Skipping create...Using existing machine configuration
	I1205 08:03:49.176954    6576 fix.go:54] fixHost starting: 
	I1205 08:03:49.185185    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:49.467905    6576 fix.go:112] recreateIfNeeded on newest-cni-042100: state=Stopped err=<nil>
	W1205 08:03:49.468085    6576 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 08:03:46.247259    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:48.745542    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:50.273234    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:48.514113    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:50.532984    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:53.014533    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:49.492567    6576 out.go:252] * Restarting existing docker container for "newest-cni-042100" ...
	I1205 08:03:49.497575    6576 cli_runner.go:164] Run: docker start newest-cni-042100
	I1205 08:03:50.779131    6576 cli_runner.go:217] Completed: docker start newest-cni-042100: (1.2815354s)
	I1205 08:03:50.788112    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:51.139299    6576 kic.go:430] container "newest-cni-042100" state is running.
	I1205 08:03:51.164376    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:51.273747    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:51.276892    6576 machine.go:94] provisionDockerMachine start ...
	I1205 08:03:51.284394    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:51.396042    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:51.397040    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:51.397040    6576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:03:51.400042    6576 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 08:03:52.385305    6576 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.385658    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 08:03:52.385720    6576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.211458s
	I1205 08:03:52.385800    6576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 08:03:52.435659    6576 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.435659    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 08:03:52.435659    6576 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2613971s
	I1205 08:03:52.435659    6576 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 08:03:52.467883    6576 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.468216    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 08:03:52.468216    6576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.2939732s
	I1205 08:03:52.468216    6576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 08:03:52.472465    6576 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.472465    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 08:03:52.472465    6576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2982024s
	I1205 08:03:52.472465    6576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 08:03:52.472991    6576 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.473088    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 08:03:52.473088    6576 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2988253s
	I1205 08:03:52.473088    6576 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 08:03:52.478918    6576 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.479537    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 08:03:52.479537    6576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3052743s
	I1205 08:03:52.479537    6576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 08:03:52.488107    6576 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.489284    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 08:03:52.489284    6576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.3150206s
	I1205 08:03:52.489284    6576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 08:03:52.587256    6576 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.588098    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 08:03:52.588098    6576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.413907s
	I1205 08:03:52.588098    6576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 08:03:52.588098    6576 cache.go:87] Successfully saved all images to host disk.
	W1205 08:03:50.818460    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	I1205 08:03:53.244351    4412 pod_ready.go:94] pod "coredns-66bc5c9577-zrgxp" is "Ready"
	I1205 08:03:53.244351    4412 pod_ready.go:86] duration metric: took 21.0105368s for pod "coredns-66bc5c9577-zrgxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.250834    4412 pod_ready.go:83] waiting for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.262503    4412 pod_ready.go:94] pod "etcd-bridge-218000" is "Ready"
	I1205 08:03:53.262503    4412 pod_ready.go:86] duration metric: took 11.6685ms for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.271087    4412 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.281426    4412 pod_ready.go:94] pod "kube-apiserver-bridge-218000" is "Ready"
	I1205 08:03:53.281426    4412 pod_ready.go:86] duration metric: took 10.3388ms for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.286385    4412 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.438718    4412 pod_ready.go:94] pod "kube-controller-manager-bridge-218000" is "Ready"
	I1205 08:03:53.438718    4412 pod_ready.go:86] duration metric: took 152.3311ms for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.641268    4412 pod_ready.go:83] waiting for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.039664    4412 pod_ready.go:94] pod "kube-proxy-8r4gs" is "Ready"
	I1205 08:03:54.039664    4412 pod_ready.go:86] duration metric: took 398.3895ms for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.241161    4412 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:94] pod "kube-scheduler-bridge-218000" is "Ready"
	I1205 08:03:54.641085    4412 pod_ready.go:86] duration metric: took 399.9175ms for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:40] duration metric: took 32.4419039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:54.749081    4412 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:03:54.754768    4412 out.go:179] * Done! kubectl is now configured to use "bridge-218000" cluster and "default" namespace by default
	W1205 08:03:55.516894    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:58.012284    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:54.578463    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.578463    6576 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 08:03:54.583153    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.645702    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.646148    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.646193    6576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 08:03:54.866524    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.872867    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.933417    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.934199    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.934272    6576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:03:55.129977    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:03:55.129977    6576 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:03:55.129977    6576 ubuntu.go:190] setting up certificates
	I1205 08:03:55.129977    6576 provision.go:84] configureAuth start
	I1205 08:03:55.133735    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:55.190185    6576 provision.go:143] copyHostCerts
	I1205 08:03:55.190185    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:03:55.190185    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:03:55.190984    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:03:55.191986    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:03:55.191986    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:03:55.192251    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:03:55.193178    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:03:55.193178    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:03:55.193462    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:03:55.194234    6576 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
	I1205 08:03:55.277216    6576 provision.go:177] copyRemoteCerts
	I1205 08:03:55.282373    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:03:55.285821    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.350220    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:55.476652    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:03:55.511250    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:03:55.546706    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 08:03:55.583614    6576 provision.go:87] duration metric: took 453.6304ms to configureAuth
	I1205 08:03:55.583614    6576 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:03:55.585275    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:55.589206    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.651189    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.652212    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.652246    6576 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:03:55.836329    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:03:55.837449    6576 ubuntu.go:71] root file system type: overlay
	I1205 08:03:55.837646    6576 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:03:55.841558    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.910453    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.911069    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.911069    6576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:03:56.123635    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:03:56.128031    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.191540    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:56.191765    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:56.191765    6576 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:03:56.396364    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:03:56.396364    6576 machine.go:97] duration metric: took 5.1193899s to provisionDockerMachine
	I1205 08:03:56.396364    6576 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 08:03:56.396897    6576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:03:56.402233    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:03:56.406223    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.460168    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.609105    6576 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:03:56.617925    6576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:03:56.617925    6576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:03:56.618732    6576 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:03:56.623542    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:03:56.637899    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:03:56.671787    6576 start.go:296] duration metric: took 274.8468ms for postStartSetup
	I1205 08:03:56.675921    6576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:03:56.678948    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.735289    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.884826    6576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:03:56.893835    6576 fix.go:56] duration metric: took 7.7168367s for fixHost
	I1205 08:03:56.893835    6576 start.go:83] releasing machines lock for "newest-cni-042100", held for 7.7169474s
	I1205 08:03:56.896826    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:56.959384    6576 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:03:56.965413    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.966255    6576 ssh_runner.go:195] Run: cat /version.json
	I1205 08:03:56.973872    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:57.022198    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:57.026201    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	W1205 08:03:57.148711    6576 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 08:03:57.162212    6576 ssh_runner.go:195] Run: systemctl --version
	I1205 08:03:57.181097    6576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:03:57.193288    6576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:03:57.197753    6576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:03:57.214357    6576 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 08:03:57.214357    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.214357    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.214357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:57.242461    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:03:57.262818    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 08:03:57.264705    6576 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:03:57.264749    6576 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:03:57.282712    6576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:03:57.286891    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:57.310466    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.333091    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:57.356105    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.377603    6576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:57.401090    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:57.423330    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:57.445407    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:57.472206    6576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:57.488210    6576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:03:57.505210    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:57.657790    6576 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 08:03:57.802417    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.802417    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.807146    6576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:57.832467    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.857712    6576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:57.930272    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.960276    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:57.984286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:58.017277    6576 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:58.032288    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:58.048281    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:03:58.077282    6576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:58.275290    6576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:58.457293    6576 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:58.457293    6576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:03:58.486286    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:58.509287    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:58.648318    6576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:04:00.173930    6576 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5255881s)
	I1205 08:04:00.177929    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:04:00.201541    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:04:00.228851    6576 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 08:04:00.259044    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:00.283032    6576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:04:00.429299    6576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:04:00.593446    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.738544    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:04:00.766865    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:04:00.791407    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.930315    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:04:01.041317    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:01.059628    6576 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:04:01.064630    6576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:04:01.072635    6576 start.go:564] Will wait 60s for crictl version
	I1205 08:04:01.076636    6576 ssh_runner.go:195] Run: which crictl
	I1205 08:04:01.090615    6576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:04:01.132099    6576 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:04:01.136068    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.182106    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.227459    6576 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 08:04:01.231071    6576 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 08:04:01.375969    6576 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:04:01.379962    6576 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:04:01.387350    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.408320    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:01.468320    6576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 08:04:00.335905    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:00.512126    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:04:03.018493    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:04:01.471323    6576 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:04:01.471323    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:04:01.475324    6576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:04:01.511342    6576 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:04:01.512362    6576 cache_images.go:86] Images are preloaded, skipping loading
	I1205 08:04:01.512362    6576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 08:04:01.512362    6576 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 08:04:01.515327    6576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:04:01.600646    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:04:01.600646    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:04:01.600646    6576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 08:04:01.600646    6576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:04:01.600646    6576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:04:01.604645    6576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 08:04:01.617663    6576 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:04:01.621646    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:04:01.634708    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 08:04:01.659457    6576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 08:04:01.681516    6576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1205 08:04:01.709549    6576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:04:01.717165    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.737936    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:01.886462    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:01.908845    6576 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 08:04:01.908845    6576 certs.go:195] generating shared ca certs ...
	I1205 08:04:01.908845    6576 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:01.910250    6576 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:04:01.910428    6576 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:04:01.910428    6576 certs.go:257] generating profile certs ...
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 08:04:01.911645    6576 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 08:04:01.912393    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:04:01.912708    6576 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:04:01.912818    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:04:01.913766    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:04:01.914884    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:04:01.946745    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:04:01.978670    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:04:02.020771    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:04:02.052789    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 08:04:02.083785    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:04:02.111686    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:04:02.138106    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:04:02.167957    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:04:02.197699    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:04:02.228974    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:04:02.258542    6576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:04:02.283541    6576 ssh_runner.go:195] Run: openssl version
	I1205 08:04:02.296537    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.312534    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:04:02.327543    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.334539    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.339544    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.392223    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:04:02.408977    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.424981    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:04:02.439981    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.446982    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.451985    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.500175    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:04:02.518368    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.537597    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:04:02.555653    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.562656    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.566659    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.617005    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:04:02.635329    6576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:04:02.649383    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 08:04:02.697863    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 08:04:02.747535    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 08:04:02.802236    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 08:04:02.853222    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 08:04:02.901642    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
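Each `openssl x509 -noout -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether certs need regenerating. An in-process equivalent using Go's standard library; the cert path is a placeholder:

```go
// In-process equivalent of `openssl x509 -noout -checkend 86400`:
// report whether the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	fmt.Println("expires within 24h:", cert.NotAfter.Before(deadline))
}
```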
	I1205 08:04:02.946962    6576 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
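The StartCluster line above is a `%+v` dump of the profile's cluster config struct, which is why it renders as space-separated `Field:value` pairs with nested braces. A trimmed, hypothetical struct showing the mechanism (the real minikube config has many more fields):

```go
// Why the StartCluster log line looks the way it does: fmt's %+v verb
// prints struct fields as Field:value pairs separated by spaces.
package main

import "fmt"

type ClusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	KubernetesVersion string
}

func main() {
	cc := ClusterConfig{
		Name: "newest-cni-042100", Driver: "docker",
		Memory: 3072, CPUs: 2, KubernetesVersion: "v1.35.0-beta.0",
	}
	fmt.Printf("StartCluster: %+v\n", cc)
	// Output: StartCluster: {Name:newest-cni-042100 Driver:docker Memory:3072 CPUs:2 KubernetesVersion:v1.35.0-beta.0}
}
```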
	I1205 08:04:02.951256    6576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:04:02.986478    6576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:04:02.999955    6576 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 08:04:02.999955    6576 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 08:04:03.003999    6576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 08:04:03.019291    6576 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 08:04:03.022819    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.083372    6576 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.084185    6576 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042100" cluster setting kubeconfig missing "newest-cni-042100" context setting]
	I1205 08:04:03.084741    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
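The kubeconfig lines above show the repair path: the profile's cluster and context entries are missing from the kubeconfig file, so it is rewritten under a file lock. A sketch of that check-and-repair using client-go's clientcmd package; the path and server address are illustrative, and the locking is omitted:

```go
// Sketch of the kubeconfig repair: load the file, add the missing
// cluster and context entries, write it back.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/path/to/kubeconfig" // placeholder for the file in the log
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	name := "newest-cni-042100"
	if _, ok := cfg.Clusters[name]; !ok { // the "does not appear" check
		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://127.0.0.1:8443"} // illustrative endpoint
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			log.Fatal(err)
		}
	}
}
```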
	I1205 08:04:03.109144    6576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 08:04:03.128232    6576 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 08:04:03.138905    6576 kubeadm.go:602] duration metric: took 138.9481ms to restartPrimaryControlPlane
	I1205 08:04:03.138905    6576 kubeadm.go:403] duration metric: took 191.9404ms to StartCluster
	I1205 08:04:03.138905    6576 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.138905    6576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.141698    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.142419    6576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:04:03.142419    6576 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
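The enable-addons line above logs a map of addon name to desired state, and only the `true` entries get `Setting addon ...` work items below. A minimal sketch of deriving that work list from such a map (names abbreviated; the real set is much larger):

```go
// Deriving the list of addons to enable from a name -> desired-state map.
package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{
		"dashboard":            true,
		"default-storageclass": true,
		"storage-provisioner":  true,
		"metrics-server":       false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled) // map iteration order is random; sort for stable logs
	fmt.Println("enabling:", enabled)
}
```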
	I1205 08:04:03.142419    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:04:03.163290    6576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting dashboard=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon dashboard=true in "newest-cni-042100"
	W1205 08:04:03.163290    6576 addons.go:248] addon dashboard should already be in state true
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.192363    6576 out.go:179] * Verifying Kubernetes components...
	I1205 08:04:03.249622    6576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-042100"
	I1205 08:04:03.250609    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.257607    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
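The repeated cli_runner calls above ask the Docker CLI for just one field of the container's state via a Go template (each addon goroutine checks independently, hence the burst of identical calls). A sketch of the same query through os/exec; the container name is taken from the log:

```go
// Query a single field of `docker container inspect` output with a
// Go template, as the cli_runner lines above do.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"newest-cni-042100", "--format", "{{.State.Status}}").Output()
	if err != nil {
		log.Fatalf("inspect failed: %v", err)
	}
	fmt.Println("state:", strings.TrimSpace(string(out))) // e.g. "running"
}
```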
	I1205 08:04:03.258609    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:03.261608    6576 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 08:04:03.264610    6576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:04:03.309607    6576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.309607    6576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:04:03.312609    6576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.312609    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:04:03.312609    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.315610    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.318607    6576 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 08:04:03.510751    7752 pod_ready.go:94] pod "coredns-66bc5c9577-gsfxl" is "Ready"
	I1205 08:04:03.510751    7752 pod_ready.go:86] duration metric: took 25.5102081s for pod "coredns-66bc5c9577-gsfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.517746    7752 pod_ready.go:83] waiting for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.529764    7752 pod_ready.go:94] pod "etcd-kubenet-218000" is "Ready"
	I1205 08:04:03.529764    7752 pod_ready.go:86] duration metric: took 12.0185ms for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.535749    7752 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.544756    7752 pod_ready.go:94] pod "kube-apiserver-kubenet-218000" is "Ready"
	I1205 08:04:03.544756    7752 pod_ready.go:86] duration metric: took 9.007ms for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.549745    7752 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.706418    7752 pod_ready.go:94] pod "kube-controller-manager-kubenet-218000" is "Ready"
	I1205 08:04:03.706418    7752 pod_ready.go:86] duration metric: took 156.6708ms for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.906896    7752 pod_ready.go:83] waiting for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.305526    7752 pod_ready.go:94] pod "kube-proxy-l9mnz" is "Ready"
	I1205 08:04:04.305526    7752 pod_ready.go:86] duration metric: took 398.0934ms for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.506453    7752 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:94] pod "kube-scheduler-kubenet-218000" is "Ready"
	I1205 08:04:04.908413    7752 pod_ready.go:86] duration metric: took 401.8894ms for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:40] duration metric: took 37.4190345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:04:05.004707    7752 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:04:05.007705    7752 out.go:179] * Done! kubectl is now configured to use "kubenet-218000" cluster and "default" namespace by default
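The interleaved pod_ready lines above (PID 7752, the parallel kubenet-218000 run) poll each kube-system pod until its Ready condition is True. A sketch of that wait with client-go; the kubeconfig path, pod name, and timeout are placeholders:

```go
// Poll a kube-system pod until its PodReady condition is True,
// mirroring the pod_ready waits in the log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for start := time.Now(); time.Since(start) < 5*time.Minute; time.Sleep(2 * time.Second) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-kubenet-218000", metav1.GetOptions{})
		if err != nil {
			continue // the pod may not exist yet
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready after", time.Since(start))
				return
			}
		}
	}
	log.Fatal("timed out waiting for pod to be Ready")
}
```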
	I1205 08:04:03.344609    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 08:04:03.344609    6576 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 08:04:03.353008    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.373762    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.389748    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.415749    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.454747    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:03.481745    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.544756    6576 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:04:03.550761    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
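The api_server wait above runs `pgrep -xnf kube-apiserver.*minikube.*` until a matching process appears. A local sketch of that polling loop; minikube runs the same command over SSH inside the node, and the timeout here is an assumption:

```go
// Poll pgrep until a kube-apiserver process shows up, as the
// ssh_runner command above does (pgrep exits 0 when the pattern matches).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output(); err == nil {
			fmt.Printf("apiserver process: %s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("apiserver process never appeared")
}
```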
	I1205 08:04:03.552751    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.556766    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 08:04:03.556766    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 08:04:03.561743    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.627813    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 08:04:03.627923    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 08:04:03.654463    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 08:04:03.654463    6576 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 08:04:03.731575    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 08:04:03.731654    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1205 08:04:03.751356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.751356    6576 retry.go:31] will retry after 148.467646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.754346    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	W1205 08:04:03.755354    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.755354    6576 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 08:04:03.755354    6576 retry.go:31] will retry after 202.130528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
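The `apply failed, will retry` / `will retry after ...` pairs above come from kubectl's OpenAPI validation failing while the apiserver on localhost:8443 is still coming up; each attempt is rescheduled with a growing, jittered delay (148ms, 202ms, 291ms, ... up to seconds). A sketch of that retry shape; the exact backoff policy in retry.go is an assumption here:

```go
// Retry a failing step with a growing, jittered delay until it
// succeeds or attempts run out, matching the log's retry pattern.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 1; i <= attempts; i++ {
		err := fn()
		if err == nil {
			return nil
		}
		if i == attempts {
			return err
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return errors.New("unreachable")
}

func main() {
	calls := 0
	_ = retry(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 4 { // the apiserver is not up for the first few tries
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("succeeded after", calls, "attempts")
}
```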
	I1205 08:04:03.774491    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 08:04:03.774491    6576 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 08:04:03.793803    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 08:04:03.793803    6576 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 08:04:03.828295    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 08:04:03.828351    6576 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 08:04:03.851355    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.851355    6576 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 08:04:03.876402    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.905217    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:03.957742    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.957742    6576 retry.go:31] will retry after 291.655688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.962256    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:03.992521    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.992521    6576 retry.go:31] will retry after 561.792628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.049441    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:04.057481    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.057556    6576 retry.go:31] will retry after 288.112081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.254701    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.343216    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.343216    6576 retry.go:31] will retry after 359.979776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.350062    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:04.431174    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.431174    6576 retry.go:31] will retry after 483.679942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.549772    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:04.559147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:04.642871    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.642871    6576 retry.go:31] will retry after 528.970083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.708123    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.787283    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.787283    6576 retry.go:31] will retry after 459.684582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.919229    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.004707    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.004707    6576 retry.go:31] will retry after 831.823948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.050298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.177969    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:05.252148    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:05.268807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.268914    6576 retry.go:31] will retry after 1.219301827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:05.381615    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.381684    6576 retry.go:31] will retry after 1.003502336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.548840    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.841493    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.945714    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.945714    6576 retry.go:31] will retry after 1.344373684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.051495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:06.390219    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:06.476859    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.476859    6576 retry.go:31] will retry after 916.677354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.493513    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:06.550586    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:06.586142    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.586142    6576 retry.go:31] will retry after 814.667109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.049968    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:07.295279    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:07.385161    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.385225    6576 retry.go:31] will retry after 2.309719888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.397737    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:07.404241    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.24760459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.229405263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.550637    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.050329    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
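Interleaved with the apply retries, the runner polls sudo pgrep -xnf kube-apiserver.*minikube.* at roughly 500ms intervals, waiting for the apiserver process to reappear before the next apply can succeed. A self-contained sketch of that poll-until-deadline loop (the command and pattern come from the log; the loop itself is illustrative, not minikube's ssh_runner, and assumes a host with sudo and pgrep):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until it exits 0 (a match exists) or ctx expires.
func waitForProcess(ctx context.Context, pattern string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
}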
	W1205 08:04:10.375252    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:04:08.551330    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.052416    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.549628    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.699045    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:09.722067    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:09.740066    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:09.854063    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.854063    6576 retry.go:31] will retry after 1.718952919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.926061    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.926061    6576 retry.go:31] will retry after 2.401961347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.960056    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.961057    6576 retry.go:31] will retry after 3.751594778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:10.049061    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:10.549298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.049797    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.550139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.577133    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:11.663155    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:11.663155    6576 retry.go:31] will retry after 4.120114825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.049572    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:12.333014    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:12.419653    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.419653    6576 retry.go:31] will retry after 2.740389125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.549673    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.050128    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.549901    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.717839    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:13.806807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:13.806807    6576 retry.go:31] will retry after 4.752661147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:14.050521    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:14.551720    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.050682    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.165926    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:15.256271    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.256271    6576 retry.go:31] will retry after 4.534312748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.549805    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.787818    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:15.865098    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.865628    6576 retry.go:31] will retry after 5.383695211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:16.050434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:16.549442    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.049923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.550083    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.049667    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.551343    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:19.104488    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 08:04:19.104793    4560 node_ready.go:38] duration metric: took 6m0.001013s for node "no-preload-104100" to be "Ready" ...
	I1205 08:04:19.107356    4560 out.go:203] 
	W1205 08:04:19.110511    4560 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 08:04:19.110554    4560 out.go:285] * 
	W1205 08:04:19.112383    4560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:04:19.116573    4560 out.go:203] 
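
The 4560-prefixed lines come from the concurrent no-preload test, whose 6-minute wait for the "no-preload-104100" node's Ready condition expires here, producing the GUEST_START exit. As a hedged illustration, assuming only plain kubectl and its standard jsonpath output, the condition being polled can be checked like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition via kubectl until it
// reports "True" or the deadline passes. This is the same condition the
// node_ready.go wait above timed out on; the node name is from the log.
func waitNodeReady(node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %v", node, timeout)
}

func main() {
	if err := waitNodeReady("no-preload-104100", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
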
	I1205 08:04:18.565349    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:18.647263    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:18.647263    6576 retry.go:31] will retry after 8.382323881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.050424    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.796280    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:19.904265    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.904265    6576 retry.go:31] will retry after 5.117792571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:20.052293    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:20.550380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.052677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.255736    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:21.356356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.356356    6576 retry.go:31] will retry after 8.875197166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.550333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.049310    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.550338    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.050244    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.551039    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.050874    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.550399    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:25.027043    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:25.050989    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:25.159593    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.159593    6576 retry.go:31] will retry after 7.802785807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.553440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.050359    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.551986    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:27.034606    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:27.050924    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:27.141503    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.141551    6576 retry.go:31] will retry after 13.674183061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.553694    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.049210    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.550842    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.051091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.549571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.051474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.237147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:30.345143    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.345143    6576 retry.go:31] will retry after 18.684554823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.552505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.050974    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.550315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.053025    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.550841    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.967139    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:33.050008    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:33.074001    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.074001    6576 retry.go:31] will retry after 21.457353412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.550375    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.053598    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.050034    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.050947    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.552933    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.049827    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.551205    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.050234    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.552156    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.050748    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.549737    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.050549    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.550949    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
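
The half-second cadence of the pgrep lines is minikube polling for a kube-apiserver process that never reappears. A sketch of such a poll, assuming only standard pgrep exit-code semantics (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess runs the same pgrep check as the log every
// 500ms until a kube-apiserver process exists or the deadline passes.
// pgrep exits non-zero when nothing matches, which Run surfaces as err.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process not found within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(time.Minute))
}
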
	I1205 08:04:40.819283    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:40.946292    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:40.946292    6576 retry.go:31] will retry after 18.180546633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:41.051295    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:41.551923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.051010    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.550802    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.050090    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.549595    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.050323    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.551060    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.050284    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.549318    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.049045    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.550390    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.050869    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.549920    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.050040    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:49.037573    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:49.050392    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:49.132808    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.132808    6576 retry.go:31] will retry after 12.282235903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.549952    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.052465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.550412    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.053026    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.551123    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.050959    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.550243    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.051085    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.550766    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.053585    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.537931    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:54.551106    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:54.662326    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:54.662326    6576 retry.go:31] will retry after 25.982171867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:55.050927    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:55.551197    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.049847    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.551717    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.050571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.552306    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.050495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.550960    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.050091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.133373    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:59.223117    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.223117    6576 retry.go:31] will retry after 23.551015037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.551231    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.047738    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.550465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.051875    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.420389    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:01.505728    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.505728    6576 retry.go:31] will retry after 17.206812229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.551821    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.051028    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.550994    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.051369    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.550326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:03.585938    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.585938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:03.590134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:03.617879    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.617879    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:03.624332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:03.651940    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.651940    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:03.656120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:03.685733    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.685733    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:03.690030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:03.719658    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.719713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:03.723576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:03.755797    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.755797    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:03.760966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:03.789461    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.789461    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:03.793178    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:03.823147    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.823147    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
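
Each burst of "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" above is minikube probing, component by component, for any control-plane container before deciding which logs are worth gathering; every probe returning "0 containers" is what pushes it into the diagnostic path. A rough equivalent of that probe loop, assuming direct local docker access instead of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// controlPlaneContainers probes docker for each k8s_<name> container,
// mirroring the logs.go checks in the transcript above.
func controlPlaneContainers() map[string][]string {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	found := make(map[string][]string)
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			continue // docker unreachable; treat as no containers
		}
		ids := strings.Fields(string(out))
		found[c] = ids
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
	return found
}

func main() { controlPlaneContainers() }
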
	I1205 08:05:03.823147    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:03.823679    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:03.890829    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:03.890829    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:03.937573    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:03.937573    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:04.028268    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:04.028268    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:04.028268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:04.054265    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:04.054265    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
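
The "Gathering logs for ..." sequence fans out over a fixed set of shell probes: the kubelet journal, dmesg, kubectl describe nodes, the docker/cri-docker journal, and crictl/docker container status. A compact sketch of that fan-out, with the command strings taken from the log (sudo and the versioned kubectl path dropped for brevity) and the assumption that they run locally rather than through minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe set taken from the transcript above; output collection elided.
	probes := map[string]string{
		"kubelet":          `journalctl -u kubelet -n 400`,
		"dmesg":            `dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"describe nodes":   `kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		"Docker":           `journalctl -u docker -u cri-docker -n 400`,
		"container status": "`which crictl || echo crictl` ps -a || docker ps -a",
	}
	for name, cmd := range probes {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// With nothing listening on :8443, "describe nodes" fails here
			// exactly as it does in the log above.
			fmt.Printf("failed %s: %v\n", name, err)
			continue
		}
		_ = out // a real collector would attach this output to the report
	}
}
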
	I1205 08:05:06.624597    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:06.650113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:06.681568    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.682088    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:06.685527    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:06.715181    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.715181    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:06.718768    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:06.748649    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.748692    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:06.752313    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:06.783519    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.783582    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:06.787257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:06.817858    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.817858    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:06.821703    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:06.854241    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.854241    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:06.857773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:06.888901    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.888901    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:06.894071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:06.923675    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.923675    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:06.923675    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:06.923675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:06.974113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:06.974166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:07.037689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:07.037689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:07.080588    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:07.080588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:07.171034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:07.171067    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:07.171067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:09.706054    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:09.732108    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:09.767273    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.767300    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:09.770837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:09.802479    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.802550    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:09.806320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:09.835537    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.835537    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:09.841566    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:09.874578    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.874578    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:09.878148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:09.906942    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.907017    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:09.910154    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:09.941197    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.941197    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:09.945133    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:09.974591    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.974591    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:09.978698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:10.007749    6576 logs.go:282] 0 containers: []
	W1205 08:05:10.007749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:10.007749    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:10.007749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:10.044236    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:10.044236    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:10.130995    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:10.130995    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:10.130995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:10.158359    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:10.158945    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:10.209053    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:10.209053    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:12.782787    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:12.809043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:12.839958    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.839958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:12.845180    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:12.876657    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.876720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:12.880739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:12.908227    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.908227    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:12.912011    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:12.942400    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.942449    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:12.945431    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:12.973155    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.973155    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:12.976739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:13.004259    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.004259    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:13.008151    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:13.038225    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.038225    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:13.041692    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:13.070500    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.070500    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:13.070500    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:13.070500    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:13.134608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:13.134608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:13.173994    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:13.173994    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:13.270602    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:13.270665    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:13.270665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:13.299297    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:13.299297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:15.870600    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:15.895506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:15.927013    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.927013    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:15.930717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:15.959875    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.959941    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:15.963955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:15.992862    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.992862    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:15.996303    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:16.023966    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.023966    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:16.027786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:16.058698    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.058698    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:16.065246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:16.094826    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.094826    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:16.098650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:16.144774    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.144820    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:16.148422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:16.177296    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.177296    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:16.177296    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:16.177296    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:16.242225    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:16.242225    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:16.283778    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:16.283778    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:16.378623    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:16.378623    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:16.378623    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:16.408736    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:16.409256    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:18.719251    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:18.815541    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:18.815541    6576 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
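
The stderr above points at --validate=false, but note the actual failure: nothing is listening on localhost:8443, so an unvalidated apply would still be refused at request time; validation merely fails first because it has to fetch /openapi/v2. For reference, a hedged sketch of the apply minikube keeps retrying, with the suggested flag added and the paths copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same invocation addons.go logs above; --validate=false only skips the
	// openapi download and does not help while the apiserver is down.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("apply failed:", err) // expected until :8443 accepts connections
	}
}
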
	I1205 08:05:18.959261    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:18.983847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:19.016048    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.016048    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:19.022913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:19.054693    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.054752    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:19.058555    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:19.087342    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.087342    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:19.090772    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:19.118199    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.118199    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:19.121567    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:19.151346    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.151346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:19.155305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:19.186521    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.186611    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:19.190219    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:19.220730    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.220730    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:19.225064    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:19.255890    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.256013    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:19.256013    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:19.256013    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:19.324476    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:19.324476    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:19.362802    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:19.362802    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:19.443537    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:19.444546    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:19.444546    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:19.474585    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:19.474647    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:20.651307    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:20.735190    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:20.735294    6576 retry.go:31] will retry after 27.405422909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.034778    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:22.060808    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:22.093037    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.093111    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:22.097193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:22.124988    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.125036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:22.128496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:22.157896    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.157947    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:22.161826    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:22.190808    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.190839    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:22.194900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:22.227226    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.227346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:22.230966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:22.260811    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.260861    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:22.264784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:22.295222    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.295331    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:22.302135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:22.343045    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.343116    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:22.343116    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:22.343116    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:22.394026    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:22.394026    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:22.457078    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:22.457078    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:22.498385    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:22.498434    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:22.581112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:22.581112    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:22.581112    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:22.780060    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:05:22.859804    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.859804    6576 retry.go:31] will retry after 21.036491608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:25.113006    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:25.148820    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:25.186604    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.186604    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:25.191401    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:25.223786    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.223867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:25.227359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:25.262253    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.262310    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:25.266030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:25.298397    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.298433    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:25.303771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:25.334112    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.334112    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:25.338565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:25.370125    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.370206    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:25.374513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:25.406130    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.406219    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:25.410417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:25.442663    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.442742    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:25.442742    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:25.442742    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:25.479786    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:25.479786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:25.573308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:25.573308    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:25.573308    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:25.599667    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:25.599667    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:25.650617    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:25.650617    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.218354    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:28.243705    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:28.279022    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.279022    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:28.283525    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:28.313798    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.313798    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:28.318172    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:28.347700    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.347700    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:28.351701    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:28.381257    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.381341    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:28.384917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:28.416041    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.416041    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:28.419541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:28.447349    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.447349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:28.451684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:28.479275    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.479307    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:28.483095    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:28.511115    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.511187    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:28.511187    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:28.511237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.574706    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:28.574706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:28.615541    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:28.615541    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:28.709604    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:28.709604    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:28.709604    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:28.738815    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:28.738815    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.300476    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:31.328202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:31.357921    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.357958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:31.361905    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:31.390844    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.390926    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:31.395488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:31.426488    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.426570    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:31.430048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:31.461632    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.461687    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:31.465105    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:31.492594    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.492657    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:31.496042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:31.523806    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.523834    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:31.527758    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:31.557959    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.558020    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:31.561776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:31.588451    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.588485    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:31.588513    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:31.588535    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:31.675984    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:31.675984    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:31.675984    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:31.706483    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:31.706567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.753154    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:31.753677    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:31.813379    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:31.813379    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.359731    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:34.386737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:34.416273    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.416306    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:34.419220    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:34.452145    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.452661    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:34.456139    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:34.486541    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.486593    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:34.489738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:34.520642    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.520642    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:34.524007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:34.556848    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.556848    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:34.560551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:34.589976    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.589976    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:34.594061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:34.623871    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.623871    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:34.627661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:34.655428    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.655428    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:34.655428    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:34.655428    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.693248    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:34.693248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:34.782095    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:34.782095    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:34.782095    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:34.809243    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:34.809243    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:34.859486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:34.859486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.427533    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:37.454695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:37.485702    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.485702    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:37.489329    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:37.522074    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.522074    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:37.525283    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:37.555534    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.555534    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:37.559473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:37.589923    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.589923    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:37.593340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:37.625230    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.625230    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:37.628764    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:37.658722    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.658722    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:37.661870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:37.693003    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.693003    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:37.696992    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:37.726216    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.726286    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:37.726286    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:37.726333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.791305    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:37.791305    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:37.829600    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:37.829600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:37.920892    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:37.920892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:37.920892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:37.947989    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:37.947989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:40.501988    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:40.527784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:40.563590    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.563590    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:40.567375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:40.598332    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.598332    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:40.602019    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:40.629289    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.629289    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:40.633378    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:40.660574    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.660630    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:40.664275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:40.691063    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.691063    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:40.694694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:40.723611    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.723667    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:40.726975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:40.755155    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.755155    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:40.759134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:40.793723    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.793723    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:40.793723    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:40.793723    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:40.831198    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:40.831198    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:40.925587    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:40.925587    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:40.925587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:40.954081    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:40.954114    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:41.007048    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:41.007096    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:43.582160    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:43.607539    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:43.638277    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.638277    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:43.642375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:43.675099    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.675099    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:43.678089    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:43.706803    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.706803    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:43.713114    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:43.740522    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.740522    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:43.744411    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:43.773724    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.773780    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:43.777763    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:43.803962    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.803962    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:43.807698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:43.839559    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.839559    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:43.843918    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:43.876174    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.876252    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:43.876252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:43.876252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:43.902671    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:05:43.934973    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:43.934973    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 08:05:43.999146    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:43.999146    6576 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:44.032735    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:44.033740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:44.075384    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:44.075384    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:44.157223    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:44.157223    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:44.157223    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:46.691333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:46.717072    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:46.748595    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.748595    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:46.752218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:46.780374    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.780374    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:46.783922    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:46.815066    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.815066    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:46.818942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:46.847510    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.847563    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:46.851012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:46.883362    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.883465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:46.886941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:46.916379    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.916451    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:46.920641    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:46.949114    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.949114    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:46.953549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:46.983164    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.983164    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:46.983164    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:46.983164    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:47.022255    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:47.022255    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:47.111784    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:47.111860    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:47.111860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:47.138559    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:47.138559    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:47.188823    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:47.189346    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:48.147422    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:48.239875    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:48.239875    6576 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:48.242898    6576 out.go:179] * Enabled addons: 
	I1205 08:05:48.245836    6576 addons.go:530] duration metric: took 1m45.1017438s for enable addons: enabled=[]
	I1205 08:05:49.757493    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:49.785573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:49.818757    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.818757    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:49.822359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:49.849919    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.849919    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:49.853892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:49.881451    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.881451    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:49.884508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:49.916549    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.916599    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:49.922025    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:49.955857    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.955857    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:49.959871    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:49.992747    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.992747    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:49.997745    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:50.027985    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.027985    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:50.032696    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:50.066315    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.066315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:50.066315    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:50.066315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:50.162764    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:50.162764    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:50.162764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:50.190807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:50.190807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:50.244357    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:50.244357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:50.306832    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:50.306832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
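
[editor's note] Each "0 containers: []" line above comes from the same probe: the harness shells out to docker ps -a --filter=name=k8s_<component> --format={{.ID}} and counts the returned IDs, so an empty result means Docker never even created a container for that control-plane component. A minimal local sketch of that lookup (a hypothetical helper, not minikube's own code; assumes only a Docker CLI on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the probe in the log: list all containers
    // (running or exited) whose name matches k8s_<component> and return
    // their IDs. An empty slice corresponds to the "0 containers: []"
    // lines above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
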
	I1205 08:05:52.850828    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:52.881404    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:52.914164    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.914164    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:52.919056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:52.946339    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.946339    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:52.950249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:52.977159    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.977159    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:52.981587    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:53.011126    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.011126    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:53.016170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:53.050900    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.050900    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:53.055929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:53.086492    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.086492    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:53.091422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:53.123587    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.123587    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:53.126586    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:53.155525    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.155525    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:53.155525    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:53.155525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:53.220198    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:53.221197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:53.261683    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:53.261683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:53.355432    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:53.355432    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:53.355432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:53.386521    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:53.386521    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:55.947613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:55.973795    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:56.007916    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.007916    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:56.011792    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:56.045094    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.045094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:56.048513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:56.082501    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.082501    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:56.086603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:56.116918    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.117005    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:56.120916    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:56.150716    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.150716    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:56.154101    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:56.186882    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.186882    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:56.190500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:56.223741    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.223741    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:56.227290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:56.255902    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.255902    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:56.255902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:56.255902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:56.285180    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:56.285180    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:56.333650    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:56.333650    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:56.393332    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:56.393332    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:56.432841    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:56.432841    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:56.521419    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
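
[editor's note] Every "describe nodes" attempt fails identically: kubectl cannot even open a TCP connection to localhost:8443 ("dial tcp [::1]:8443: connect: connection refused"), so no API request is ever made. The failure can be isolated from kubectl entirely with a plain dial probe; a minimal sketch, assuming nothing beyond the address printed in the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint kubectl is dialing in the log. "connection
        // refused" means the host is reachable but nothing is listening
        // on 8443, i.e. kube-apiserver is not up.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
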
	I1205 08:05:59.025923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:59.056473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:59.091893    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.091909    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:59.095650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:59.128079    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.128185    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:59.131611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:59.159655    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.159655    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:59.163348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:59.192422    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.192422    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:59.196339    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:59.226737    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.226737    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:59.230776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:59.258194    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.258194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:59.261784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:59.292592    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.292592    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:59.296370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:59.323764    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.323764    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:59.323764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:59.323764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:59.375689    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:59.376207    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:59.440586    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:59.440586    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:59.479856    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:59.479856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:59.578161    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:59.578161    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:59.578161    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
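
[editor's note] Between probes the harness collects the same evidence sources on every pass: the kubelet and docker/cri-docker journals, the warning-and-above tail of dmesg, a container listing, and the describe-nodes attempt. A sketch of that collection step as a standalone helper (hypothetical; the commands are copied verbatim from the log and must run inside the minikube node, e.g. via minikube ssh):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same evidence sources the harness gathers on each pass,
        // in one representative order.
        steps := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range steps {
            fmt.Println("Gathering logs for", s.name, "...")
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Println("error:", err)
            }
            fmt.Print(string(out))
        }
    }
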
	I1205 08:06:02.111153    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:02.137611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:02.172231    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.172231    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:02.176271    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:02.208274    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.208274    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:02.211990    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:02.244184    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.244245    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:02.247661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:02.278388    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.278388    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:02.282228    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:02.312290    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.312290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:02.316470    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:02.345487    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.345487    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:02.349444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:02.378305    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.378305    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:02.381923    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:02.409737    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.409737    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:02.409737    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:02.409737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:02.477029    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:02.477029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:02.517422    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:02.517422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:02.605249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:02.605249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:02.605249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.632767    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:02.632828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:05.196182    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:05.221488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:05.251281    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.251355    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:05.254854    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:05.284103    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.284103    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:05.288076    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:05.315552    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.315552    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:05.319409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:05.347664    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.347664    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:05.351387    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:05.382685    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.382685    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:05.386801    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:05.416816    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.416816    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:05.421471    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:05.451265    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.451350    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:05.455129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:05.486455    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.486455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:05.486455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:05.486455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:05.548252    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:05.548252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:05.586103    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:05.586103    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:05.689902    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:05.689902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:05.689902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:05.715463    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:05.715463    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
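
[editor's note] The timestamps show the whole cycle repeating roughly every three seconds: each iteration opens with sudo pgrep -xnf kube-apiserver.*minikube.* and, finding no process, falls back to the diagnostics above. A minimal sketch of such a wait loop (hypothetical; minikube's actual retry logic is not shown in this log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a running kube-apiserver process the
    // same way the log does, giving up after the deadline.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches.
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(1 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
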
	I1205 08:06:08.298546    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:08.325694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:08.358357    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.358427    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:08.362535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:08.393631    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.393631    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:08.397365    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:08.429162    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.429162    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:08.433444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:08.464672    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.464672    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:08.467810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:08.496450    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.496450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:08.499640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:08.526246    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.526246    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:08.530507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:08.558130    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.558130    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:08.561856    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:08.590753    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.590753    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:08.590753    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:08.590753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:08.656049    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:08.656049    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:08.697268    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:08.697268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:08.794510    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:08.794510    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:08.794510    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:08.839662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:08.839734    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:11.394677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:11.423727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:11.453346    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.453346    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:11.460955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:11.498834    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.498834    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:11.498834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:11.532657    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.532657    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:11.540987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:11.575759    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.575786    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:11.579561    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:11.612047    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.612102    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:11.615579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:11.644318    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.644370    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:11.648326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:11.678026    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.678026    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:11.681899    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:11.711631    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.711631    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:11.711631    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:11.711631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:11.772905    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:11.772905    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:11.814639    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:11.814639    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:11.905607    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:11.905657    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:11.905700    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:11.934717    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:11.935238    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
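
[editor's note] The "container status" step uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. If which crictl finds nothing, the command substitution degrades to the bare word crictl, that invocation fails, and the outer || falls through to plain docker ps -a. The same preference order expressed in Go (a sketch, assuming only that at least one of the two CLIs is installed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it is on PATH, otherwise fall back to the
        // Docker CLI -- the same order as the shell one-liner in the log.
        name := "crictl"
        if _, err := exec.LookPath("crictl"); err != nil {
            name = "docker"
        }
        out, err := exec.Command(name, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }
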
	I1205 08:06:14.488836    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:14.512857    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:14.546571    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.546571    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:14.549903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:14.580887    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.580887    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:14.584967    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:14.630312    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.630312    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:14.633809    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:14.667373    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.667373    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:14.671026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:14.699813    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.699813    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:14.703177    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:14.734619    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.734619    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:14.739056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:14.769129    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.769129    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:14.773030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:14.803689    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.803689    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:14.803689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:14.803689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:14.841923    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:14.841923    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:14.932570    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:14.932570    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:14.932570    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:14.961067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:14.961591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:15.010912    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:15.010953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:17.575458    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:17.603741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:17.636367    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.636367    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:17.640529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:17.668380    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.668380    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:17.672111    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:17.700544    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.700544    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:17.704634    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:17.736823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.736823    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:17.741002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:17.770125    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.770125    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:17.775816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:17.812823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.812823    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:17.815683    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:17.844895    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.844895    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:17.849115    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:17.880706    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.880706    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:17.880706    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:17.880706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:17.969171    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:17.969171    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:17.969263    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:17.995396    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:17.995396    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:18.044466    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:18.044466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:18.105721    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:18.105721    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:20.651671    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:20.679273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:20.707727    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.707727    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:20.711373    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:20.741891    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.741891    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:20.746073    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:20.777260    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.777260    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:20.780520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:20.816982    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.816982    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:20.820520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:20.850461    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.850461    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:20.854205    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:20.882429    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.882429    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:20.886920    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:20.914179    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.914179    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:20.917831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:20.949708    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.949708    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:20.949708    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:20.949708    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:21.013967    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:21.013967    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:21.053946    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:21.053946    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:21.140482    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:21.141002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:21.141002    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:21.170239    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:21.170239    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:23.729627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:23.758686    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:23.791537    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.791594    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:23.796131    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:23.827894    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.827894    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:23.832419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:23.862718    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.862718    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:23.867837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:23.896272    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.896272    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:23.900193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:23.929016    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.929078    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:23.932778    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:23.962372    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.962447    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:23.966147    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:23.998472    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.998472    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:24.004351    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:24.033564    6576 logs.go:282] 0 containers: []
	W1205 08:06:24.033564    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:24.033564    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:24.033564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:24.099505    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:24.099505    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:24.139900    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:24.139900    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:24.233474    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:24.233474    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:24.233474    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:24.263408    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:24.263408    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:26.816321    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:26.841457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:26.872936    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.872992    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:26.876345    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:26.908512    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.908580    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:26.912736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:26.944068    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.944068    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:26.947603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:26.975323    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.975360    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:26.978941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:27.008708    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.008751    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:27.012371    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:27.044160    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.044225    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:27.047780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:27.078172    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.078172    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:27.081803    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:27.111287    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.111370    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:27.111370    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:27.111435    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:27.161265    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:27.161329    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:27.221473    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:27.221473    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:27.263907    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:27.263907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:27.357876    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:27.357876    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:27.357876    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:29.890252    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:29.916690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:29.946274    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.946274    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:29.950679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:29.979149    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.979149    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:29.982229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:30.010085    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.010085    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:30.014016    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:30.043254    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.043254    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:30.048048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:30.080613    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.080613    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:30.084300    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:30.114627    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.114627    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:30.118584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:30.147947    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.148009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:30.151166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:30.180743    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.180828    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:30.180828    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:30.180828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:30.244646    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:30.244646    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:30.286079    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:30.286079    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:30.376557    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:30.376557    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:30.376557    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:30.405737    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:30.405737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:32.958550    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:32.987728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:33.018308    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.018370    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:33.022062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:33.052435    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.052435    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:33.056434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:33.085355    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.085426    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:33.089343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:33.121676    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.121737    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:33.125504    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:33.157765    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.157765    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:33.161892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:33.191061    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.191061    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:33.194930    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:33.223173    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.223173    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:33.226650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:33.257481    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.257481    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:33.257481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:33.257481    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:33.301467    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:33.301467    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:33.389528    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:33.389528    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:33.389528    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:33.418631    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:33.418631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:33.465106    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:33.465185    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.034296    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:36.063459    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:36.095210    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.095210    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:36.098565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:36.127708    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.127786    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:36.131615    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:36.159964    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.159964    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:36.163771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:36.192604    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.192604    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:36.196679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:36.224877    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.224958    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:36.228553    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:36.258280    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.258280    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:36.261911    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:36.294140    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.294140    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:36.298273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:36.329657    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.329657    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:36.329657    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:36.329657    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:36.387784    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:36.387784    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.452385    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:36.452385    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:36.493394    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:36.493394    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:36.591485    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:36.591485    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:36.591567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.124474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:39.152578    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:39.183392    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.183392    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:39.187028    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:39.216193    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.216193    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:39.219743    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:39.251680    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.251759    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:39.255869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:39.283843    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.283843    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:39.287237    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:39.316021    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.316021    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:39.319015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:39.349194    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.349194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:39.352951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:39.403729    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.403729    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:39.411012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:39.442909    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.442909    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:39.442909    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:39.442909    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:39.509174    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:39.509174    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:39.550483    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:39.550483    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:39.650354    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:39.650354    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:39.650354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.676786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:39.676786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.228069    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:42.258786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:42.290791    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.290791    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:42.294739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:42.326094    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.326094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:42.329725    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:42.356052    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.356052    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:42.359752    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:42.390464    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.390464    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:42.393935    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:42.421882    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.421882    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:42.426609    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:42.457036    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.457036    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:42.460988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:42.486064    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.486064    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:42.491250    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:42.521748    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.521748    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:42.521748    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:42.521748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:42.551195    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:42.552197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.613626    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:42.613683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:42.678856    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:42.679856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:42.719297    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:42.719297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:42.811034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
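	The cycles above repeat at a roughly three-second cadence: minikube is polling for a kube-apiserver process and re-collecting logs on every miss. A comparable wait loop with an explicit deadline, sketched here with an assumed five-minute budget (this is not minikube's own implementation):

	    deadline=$(( $(date +%s) + 300 ))   # assumed 5-minute budget
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3   # matches the cadence visible in the timestamps above
	    done
	    echo "kube-apiserver is running"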
	I1205 08:06:45.316640    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:45.343574    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:45.372899    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.372899    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:45.376229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:45.408264    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.408264    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:45.412119    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:45.440697    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.440697    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:45.444501    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:45.471692    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.471727    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:45.475496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:45.508400    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.508450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:45.512541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:45.544177    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.544233    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:45.548858    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:45.579165    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.579165    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:45.582164    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:45.623052    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.623052    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:45.623052    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:45.623052    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:45.651554    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:45.651554    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:45.701716    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:45.701768    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:45.766248    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:45.766248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:45.806341    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:45.806341    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:45.895675    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
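	Each cycle collects the same four log sources: the kubelet journal, the Docker and cri-docker journals, the kernel ring buffer, and container status. They can be pulled in one shot with the commands copied from the lines above (run inside the node, e.g. after minikube -p <profile> ssh; adjust the -n and tail limits as needed):

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a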
	I1205 08:06:48.401571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:48.432481    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:48.466418    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.466418    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:48.471424    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:48.503617    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.503617    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:48.507677    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:48.541480    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.541480    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:48.547529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:48.579177    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.579177    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:48.585087    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:48.626465    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.626465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:48.630533    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:48.660304    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.660304    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:48.663999    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:48.694957    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.694957    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:48.699665    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:48.725908    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.725908    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:48.725908    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:48.725908    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:48.817395    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:48.817466    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:48.817466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:48.848226    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:48.848739    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:48.900060    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:48.900060    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:48.962797    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:48.962797    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
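
	Each polling cycle above runs the same per-component scan: docker ps -a with a name filter for each k8s_ container, formatted to print only container IDs, where empty output yields the "0 containers" lines. A sketch of the equivalent scan run by hand (PROFILE is a placeholder; the component list mirrors the one in the log):

	    # Hedged sketch of the per-component container scan repeated in the log above.
	    PROFILE=minikube   # assumption: substitute the profile under test
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(minikube ssh -p "$PROFILE" "docker ps -a --filter name=k8s_${c} --format '{{.ID}}'")
	      echo "${c}: ${ids:-none}"   # empty output matches the "0 containers" lines above
	    done
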
	I1205 08:06:51.508647    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:51.536278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:51.573226    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.573323    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:51.578061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:51.614603    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.614603    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:51.619576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:51.647095    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.647095    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:51.652535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:51.680320    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.680369    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:51.684269    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:51.717798    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.717827    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:51.721877    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:51.750482    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.750482    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:51.754602    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:51.786216    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.786216    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:51.790834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:51.819030    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.819030    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:51.819030    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:51.819030    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:51.876069    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:51.876110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:51.938469    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:51.938469    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.980953    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:51.980953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:52.079938    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
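
	The failing describe-nodes command is recorded verbatim, so it can be replayed directly against the node's bundled kubectl binary and kubeconfig. This is the exact command from the log, wrapped in minikube ssh with a placeholder profile name:

	    # Hedged sketch: replay the describe-nodes check from the log by hand.
	    PROFILE=minikube   # assumption: substitute the profile under test
	    minikube ssh -p "$PROFILE" 'sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'
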
	I1205 08:06:52.079938    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:52.079938    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:54.616891    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:54.642146    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:54.675691    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.675691    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:54.679440    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:54.709522    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.709522    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:54.713343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:54.744053    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.744112    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:54.748148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:54.782163    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.782232    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:54.786128    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:54.817067    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.817067    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:54.820867    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:54.850003    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.850003    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:54.854439    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:54.882517    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.882566    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:54.886475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:54.917057    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.917057    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:54.917057    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:54.917057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:54.982333    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:54.982333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:55.023534    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:55.023534    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:55.136747    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:55.136823    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:55.136823    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:55.169237    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:55.169237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
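
	The "container status" step uses a fallback chain: resolve crictl on PATH if installed, run the bare name otherwise, and if that still fails, fall back to the Docker CLI. The same one-liner from the log, runnable by hand and written with $() instead of the log's backticks (PROFILE is a placeholder):

	    # Hedged sketch of the container-status fallback used in the log above:
	    # prefer crictl when it resolves on PATH, otherwise fall back to docker.
	    PROFILE=minikube   # assumption: substitute the profile under test
	    minikube ssh -p "$PROFILE" 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'
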
	I1205 08:06:57.723958    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:57.750382    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:57.784932    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.784932    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:57.788837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:57.815350    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.815350    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:57.819773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:57.850513    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.850513    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:57.854585    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:57.885405    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.885405    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:57.889340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:57.917143    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.917143    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:57.921061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:57.947843    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.947843    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:57.951577    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:57.983169    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.983169    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:57.986925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:58.016381    6576 logs.go:282] 0 containers: []
	W1205 08:06:58.016381    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:58.016381    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:58.016381    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:58.081766    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:58.081766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:58.122021    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:58.122021    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:58.216654    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:58.216654    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:58.216654    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:58.245369    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:58.245369    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:00.814255    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:00.841335    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:00.870336    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.870336    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:00.874294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:00.905321    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.905321    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:00.908814    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:00.940896    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.940896    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:00.944651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:00.975783    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.975855    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:00.979485    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:01.007166    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.007166    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:01.011052    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:01.038708    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.038708    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:01.043766    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:01.072944    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.072944    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:01.076562    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:01.104574    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.104623    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:01.104665    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:01.104665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:01.169748    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:01.169748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:01.210259    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:01.210259    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:01.310310    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:01.310310    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:01.310310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:01.336589    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:01.336589    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:03.889510    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:03.919078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:03.953291    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.953291    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:03.956276    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:03.986975    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.986975    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:03.991157    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:04.022935    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.022935    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:04.026117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:04.058273    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.058312    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:04.061868    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:04.093136    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.093136    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:04.096666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:04.122322    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.122349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:04.126167    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:04.158513    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.158545    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:04.161969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:04.190492    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.190569    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:04.190569    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:04.190569    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:04.259062    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:04.259062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:04.299558    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:04.299558    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:04.393556    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:04.393644    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:04.393644    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:04.420122    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:04.420122    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:06.976110    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:07.001980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:07.033975    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.033975    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:07.040090    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:07.069823    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.069823    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:07.074015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:07.103072    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.103072    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:07.107448    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:07.138770    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.138770    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:07.142987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:07.174660    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.174660    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:07.178913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:07.209719    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.209719    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:07.215472    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:07.243539    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.243539    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:07.248737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:07.279448    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.279448    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:07.279448    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:07.279448    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:07.345481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:07.346489    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:07.384275    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:07.384275    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:07.479588    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:07.479588    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:07.479588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:07.506786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:07.506786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:10.078099    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:10.103951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:10.139034    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.139034    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:10.142691    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:10.174629    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.174629    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:10.178323    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:10.206817    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.206817    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:10.210968    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:10.239729    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.239820    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:10.245043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:10.277712    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.277712    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:10.283741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:10.315362    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.315362    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:10.318268    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:10.346693    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.346693    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:10.350670    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:10.379081    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.379081    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:10.379081    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:10.379081    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:10.443299    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:10.443299    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:10.482497    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:10.482497    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:10.567024    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:10.567024    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:10.567024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:10.596635    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:10.596635    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:13.157670    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:13.186965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:13.222698    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.222730    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:13.226690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:13.261914    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.261957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:13.265780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:13.294590    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.294590    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:13.299066    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:13.329216    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.329216    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:13.334474    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:13.366263    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.366290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:13.369870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:13.398379    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.398379    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:13.402396    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:13.430465    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.430465    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:13.434253    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:13.462873    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.462905    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:13.462905    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:13.462949    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:13.525954    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:13.526955    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:13.566284    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:13.567284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:13.656971    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:13.656971    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:13.656971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:13.684284    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:13.684284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.241440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:16.268513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:16.302653    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.302653    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:16.306429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:16.337387    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.337387    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:16.342004    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:16.371449    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.371449    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:16.376376    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:16.406912    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.406912    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:16.410777    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:16.438875    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.438875    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:16.442983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:16.470299    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.470299    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:16.474336    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:16.504067    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.504067    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:16.508174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:16.536869    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.536869    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:16.536869    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:16.536869    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:16.624673    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:16.624703    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:16.624755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:16.653894    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:16.653894    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.701985    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:16.701985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:16.763148    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:16.763148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
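For readers following the probe sequence above: each `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` line is minikube asking the node's Docker daemon whether a given control-plane container exists, and an empty result produces the paired "0 containers: []" / "No container was found matching" lines. A minimal Go sketch of that check (the helper name and the hard-coded component list are illustrative, not minikube's actual API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe shown in the log:
//	docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		// An empty slice corresponds to the W-level "No container was found" warning.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}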
	I1205 08:07:19.307232    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:19.334513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:19.371034    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.371140    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:19.375038    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:19.403110    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.403186    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:19.407168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:19.435904    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.435904    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:19.440294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:19.470700    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.470700    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:19.474611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:19.502846    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.502915    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:19.506400    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:19.540483    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.540483    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:19.544695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:19.576470    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.576501    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:19.579834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:19.609587    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.609587    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:19.609587    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:19.609587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.653000    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:19.653000    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:19.747787    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:19.747787    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:19.747787    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:19.774804    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:19.774804    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:19.825222    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:19.825338    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
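The root cause behind every failure block in this stretch is the same: nothing is listening on the apiserver port, so kubectl's TCP dial to localhost:8443 is refused before any TLS or HTTP exchange happens (kubectl tries the IPv6 loopback first, hence `dial tcp [::1]:8443` in the stderr). A hedged Go sketch of the equivalent low-level check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl resolves; with no kube-apiserver container
	// running, this fails with "connect: connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}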
	I1205 08:07:22.394074    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:22.419163    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:22.454202    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.454202    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:22.457716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:22.487462    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.487615    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:22.491427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:22.522398    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.522398    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:22.526148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:22.554536    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.554536    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:22.558447    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:22.590329    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.590401    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:22.595088    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:22.626553    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.626553    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:22.630372    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:22.658911    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.658911    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:22.662715    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:22.692369    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.692444    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:22.692468    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:22.692468    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.759391    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:22.759391    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:22.801415    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:22.801415    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:22.891643    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:22.881338   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.883456   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.887030   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.888265   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.889355   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:22.881338   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.883456   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.887030   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.888265   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.889355   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:22.891710    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:22.891738    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:22.922662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:22.922662    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
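The failing "describe nodes" step itself is a plain subprocess call: minikube runs the version-pinned kubectl it ships inside the node against the node-local kubeconfig, and a non-zero exit surfaces as the "Process exited with status 1" warning. A minimal reproduction sketch (the binary and kubeconfig paths are copied from the log; the surrounding error handling is an assumption, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down, out carries the five memcache.go
		// "connection refused" lines plus kubectl's summary line.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}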
	I1205 08:07:25.480645    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:25.506403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:25.536534    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.536600    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:25.540233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:25.568373    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.568373    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:25.572581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:25.604196    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.604196    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:25.608476    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:25.639923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.640007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:25.643813    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:25.673923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.673923    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:25.677542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:25.709156    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.709156    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:25.712910    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:25.744371    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.744371    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:25.750463    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:25.778113    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.778113    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:25.778113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:25.778113    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:25.842953    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:25.842953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:25.881310    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:25.881310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:25.976920    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:25.964944   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.966342   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.968369   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.969905   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.970655   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:25.964944   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.966342   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.968369   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.969905   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.970655   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:25.976920    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:25.976920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:26.005828    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:26.005889    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:28.568522    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:28.594981    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:28.628025    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.628025    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:28.631569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:28.661047    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.661047    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:28.664662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:28.692667    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.692667    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:28.696624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:28.725878    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.725944    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:28.730056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:28.758073    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.758129    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:28.761794    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:28.788812    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.788812    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:28.793030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:28.839778    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.839778    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:28.843937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:28.873288    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.873288    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:28.873288    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:28.873288    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:28.937414    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:28.937414    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:28.975610    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:28.975610    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:29.110286    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:29.068093   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.099868   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.101288   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.103705   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.105454   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:29.068093   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.099868   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.101288   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.103705   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.105454   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:29.110286    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:29.110286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:29.140120    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:29.140120    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
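For reference, the four "Gathering logs for ..." sources cycled through above are each a one-line shell pipeline executed on the node. Only the command strings below are taken from the log; the loop around them is an illustrative sketch, not minikube's collector:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied verbatim from the log; the "container status"
	// entry falls back to plain docker when crictl is absent.
	sources := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"Docker", `sudo journalctl -u docker -u cri-docker -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
			continue
		}
		fmt.Printf("gathered %s: %d bytes\n", s.name, len(out))
	}
}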
	I1205 08:07:31.695315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:31.723717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:31.755093    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.755155    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:31.758672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:31.786260    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.786260    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:31.790917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:31.817450    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.817450    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:31.822438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:31.852769    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.852788    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:31.856218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:31.885715    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.885715    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:31.890036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:31.919240    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.919240    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:31.924888    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:31.956860    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.956860    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:31.960848    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:31.989055    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.989055    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:31.989055    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:31.989055    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:32.055751    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:32.055751    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:32.091848    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:32.091848    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:32.183494    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:32.172400   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.173483   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.174469   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.175868   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.177099   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:32.172400   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.173483   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.174469   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.175868   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.177099   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:32.183494    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:32.183494    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:32.211020    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:32.211056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:34.770702    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:34.796134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:34.830020    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.830052    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:34.833506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:34.860829    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.860829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:34.864718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:34.895302    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.895302    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:34.899305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:34.928933    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.928933    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:34.935599    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:34.964256    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.964280    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:34.967945    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:34.995571    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.995571    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:35.001155    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:35.038603    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.038603    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:35.042249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:35.075025    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.075025    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:35.075025    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:35.075025    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:35.136020    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:35.136020    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:35.198233    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:35.198233    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:35.236713    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:35.236713    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:35.327635    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:35.315598   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.316759   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.320319   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.322127   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.323353   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:35.315598   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.316759   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.320319   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.322127   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.323353   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:35.327659    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:35.327659    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:37.859618    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:37.890074    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:37.922724    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.922724    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:37.926571    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:37.959720    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.959720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:37.963770    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:37.991602    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.991602    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:37.995673    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:38.023771    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.023771    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:38.030170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:38.061676    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.061676    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:38.065660    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:38.116492    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.116542    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:38.122475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:38.151483    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.151483    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:38.155624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:38.184512    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.184512    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:38.184512    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:38.184512    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:38.221972    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:38.221972    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:38.315283    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:38.304319   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.306082   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.307978   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.309605   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.310846   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:38.304319   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.306082   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.307978   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.309605   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.310846   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:38.315283    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:38.315283    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:38.342209    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:38.342209    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:38.391392    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:38.391470    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
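Stepping back from the individual cycles: the timestamps show the whole sequence repeating roughly every three seconds, each round opening with `sudo pgrep -xnf kube-apiserver.*minikube.*` to see whether an apiserver process has appeared yet. A sketch of that wait loop (the interval and deadline are assumptions inferred from the timestamps, not minikube's actual constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process matches the
// same pattern the log polls with; pgrep exits non-zero when nothing matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3 s gap between poll rounds
	}
	fmt.Println("timed out waiting for kube-apiserver; dumping gathered logs")
}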
	I1205 08:07:40.955418    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:40.982062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:41.015938    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.015938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:41.019996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:41.049917    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.049917    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:41.052925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:41.084946    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.084946    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:41.088068    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:41.120218    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.120297    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:41.123688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:41.152948    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.152948    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:41.156508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:41.183795    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.183795    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:41.187681    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:41.217097    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.217097    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:41.221130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:41.252354    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.252354    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:41.252354    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:41.252354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:41.345903    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:41.332593   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.336834   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.339033   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340171   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340983   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:41.332593   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.336834   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.339033   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340171   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340983   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:41.345903    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:41.345903    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:41.373149    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:41.373149    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:41.423553    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:41.423553    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:41.485144    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:41.485144    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:44.029139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:44.056384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:44.087995    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.088078    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:44.091865    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:44.118934    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.118934    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:44.122494    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:44.150822    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.150864    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:44.154454    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:44.183401    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.183401    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:44.187086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:44.214588    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.214644    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:44.217896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:44.249548    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.249548    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:44.253290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:44.281230    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.281230    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:44.284996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:44.314362    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.314426    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:44.314426    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:44.314426    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:44.378166    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:44.378166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:44.420024    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:44.420024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:44.510942    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:44.501504   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.502772   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.503633   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.506343   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.507775   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:44.501504   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.502772   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.503633   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.506343   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.507775   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:44.510942    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:44.510942    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:44.539432    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:44.539482    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:47.095962    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:47.121976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:47.155042    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.155042    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:47.159040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:47.188768    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.188768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:47.192847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:47.220500    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.220500    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:47.224299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:47.252483    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.252483    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:47.256264    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:47.285852    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.285852    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:47.290573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:47.319383    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.319450    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:47.323007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:47.353203    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.353203    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:47.357241    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:47.385498    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.385498    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:47.385498    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:47.385498    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:47.449686    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:47.449686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:47.490407    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:47.490407    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:47.577868    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:47.566167   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.567021   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.569823   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.570745   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.574800   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:47.577868    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:47.577868    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:47.604652    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:47.604652    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:50.157279    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:50.184328    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:50.218852    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.218852    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:50.222438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:50.250551    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.250571    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:50.254169    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:50.285371    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.285424    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:50.289741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:50.320093    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.320093    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:50.323845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:50.357038    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.357084    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:50.360291    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:50.389753    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.389829    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:50.392859    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:50.423710    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.423710    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:50.427343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:50.454456    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.454456    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:50.454456    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:50.454456    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:50.516581    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:50.516581    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:50.555412    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:50.555412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:50.648402    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:50.638282   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.639233   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.641786   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.642733   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.645724   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:50.648402    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:50.648402    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:50.673701    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:50.673701    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
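Between container probes, the same four log sources are collected each round: kubelet and Docker via journalctl, the kernel ring buffer via dmesg, and container status via crictl with a docker fallback. A sketch of that collection step, under the assumption that the node-side commands from the log are run locally through bash -c rather than over minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one node-side command and prints whatever comes back,
// mirroring the "Gathering logs for <name> ..." lines above.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}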
	I1205 08:07:53.230542    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:53.256707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:53.290781    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.290781    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:53.294254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:53.326261    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.326261    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:53.329838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:53.359630    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.359630    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:53.364896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:53.396046    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.396046    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:53.400120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:53.428713    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.428713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:53.432409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:53.462479    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.462479    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:53.467583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:53.495306    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.495306    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:53.499565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:53.530622    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.530622    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:53.530622    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:53.530622    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:53.593183    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:53.593183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:53.633807    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:53.633807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:53.721016    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:53.712922   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.714157   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.715494   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.716874   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.718161   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:53.721016    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:53.721016    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:53.748333    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:53.748442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.315862    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:56.341452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:56.374032    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.374063    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:56.377843    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:56.408635    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.408698    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:56.412330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:56.442083    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.442083    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:56.445380    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:56.473679    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.473749    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:56.477263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:56.506107    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.506156    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:56.510975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:56.538958    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.539022    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:56.542581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:56.572303    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.572303    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:56.576375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:56.604073    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.604073    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:56.604073    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:56.604145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:56.641552    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:56.641552    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:56.734944    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:56.721878   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.722727   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.725718   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.727423   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.728368   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:56.735002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:56.735046    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:56.770367    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:56.770412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.826378    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:56.826378    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.393300    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:59.417617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:59.452220    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.452220    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:59.456092    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:59.484787    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.484787    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:59.488348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:59.516670    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.516670    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:59.521214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:59.548048    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.548048    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:59.551862    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:59.576869    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.576869    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:59.581825    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:59.610579    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.610579    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:59.614523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:59.642507    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.642507    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:59.646397    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:59.675062    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.675062    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:59.675062    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:59.675062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.739704    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:59.739704    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:59.782363    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:59.782363    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:59.876076    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:59.865923   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.867089   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.868088   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.870067   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.871213   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:59.876076    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:59.876076    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:59.903005    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:59.903005    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:02.456978    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:02.483895    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:02.516374    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.516374    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:02.520443    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:02.553066    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.553148    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:02.556844    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:02.585220    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.585220    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:02.589183    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:02.620655    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.620655    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:02.625389    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:02.659292    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.659369    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:02.662727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:02.690972    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.690972    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:02.694944    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:02.723751    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.723797    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:02.727357    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:02.764750    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.764750    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:02.764750    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:02.764750    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:02.834733    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:02.834733    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:02.873432    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:02.873432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:02.963503    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:02.952119   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.955623   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.956877   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.957681   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.960011   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:02.963503    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:02.963503    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:02.992067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:02.992067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
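Every describe-nodes attempt fails the same way: kubectl's API discovery gets "connection refused" from localhost:8443, meaning nothing is listening where the apiserver should be. The same diagnosis can be made without kubectl via a bare TCP probe (host and port taken from the kubeconfig the log is using):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver endpoint from the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// While the apiserver is down this prints e.g.
		// "... connect: connection refused", matching the kubectl errors.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}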
	I1205 08:08:05.547340    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:05.572946    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:05.605473    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.605473    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:05.609479    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:05.639072    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.639072    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:05.642702    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:05.674126    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.674174    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:05.678318    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:05.710378    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.710378    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:05.713988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:05.743263    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.743263    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:05.748802    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:05.777467    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.777467    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:05.781993    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:05.816147    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.816147    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:05.820044    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:05.849173    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.849173    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:05.849173    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:05.849173    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:05.937771    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:05.926656   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.928398   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.929479   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.932790   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.933608   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:05.937771    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:05.937771    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:05.965110    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:05.965110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:06.012927    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:06.012927    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:06.076287    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:06.076287    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:08.621402    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:08.647297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:08.678598    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.678679    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:08.681866    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:08.710779    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.710856    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:08.714554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:08.745379    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.745379    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:08.750135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:08.785796    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.785840    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:08.791900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:08.823728    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.823778    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:08.827659    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:08.858652    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.858726    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:08.862304    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:08.893238    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.893287    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:08.896783    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:08.927578    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.927578    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:08.927578    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:08.927578    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:08.990752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:08.990752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:09.030509    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:09.030509    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:09.116112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:09.116629    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:09.116629    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:09.148307    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:09.148307    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:11.720341    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:11.750190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:11.784223    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.784247    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:11.789837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:11.819184    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.819184    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:11.824438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:11.852058    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.852058    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:11.857984    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:11.888391    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.888391    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:11.891707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:11.921973    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.921973    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:11.925426    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:11.953845    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.953845    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:11.957863    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:11.987150    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.987236    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:11.990921    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:12.018843    6576 logs.go:282] 0 containers: []
	W1205 08:08:12.018895    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:12.018895    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:12.018918    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:12.048523    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:12.048523    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:12.099490    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:12.099490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:12.163368    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:12.163368    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:12.204867    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:12.204867    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:12.290894    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:14.795945    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:14.821749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:14.851399    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.851399    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:14.855010    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:14.887370    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.887370    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:14.891117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:14.922139    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.922139    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:14.926245    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:14.954095    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.954095    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:14.959551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:14.987564    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.987564    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:14.991080    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:15.023941    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.023941    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:15.027344    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:15.056411    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.056474    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:15.059417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:15.092400    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.092400    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:15.092400    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:15.092400    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:15.119932    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:15.119932    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:15.169067    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:15.169067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:15.232603    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:15.232603    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:15.276106    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:15.276106    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:15.363421    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:17.870108    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:17.895889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:17.927528    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.927528    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:17.931166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:17.959105    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.959105    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:17.962846    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:17.994011    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.994011    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:17.998047    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:18.026606    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.026677    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:18.030234    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:18.061389    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.061389    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:18.065290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:18.096454    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.096454    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:18.100320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:18.129213    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.129213    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:18.133040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:18.160088    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.160111    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:18.160111    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:18.160111    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:18.221228    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:18.221228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:18.258886    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:18.258886    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:18.348416    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:18.348496    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:18.348525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:18.379855    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:18.379855    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:20.936239    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:20.959002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:20.990013    6576 logs.go:282] 0 containers: []
	W1205 08:08:20.990085    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:20.993773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:21.021884    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.021925    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:21.025964    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:21.054531    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.054531    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:21.058277    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:21.088997    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.089078    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:21.092631    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:21.121326    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.121360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:21.125135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:21.160429    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.160496    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:21.164226    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:21.192488    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.192557    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:21.196294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:21.228406    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.228445    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:21.228445    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:21.228495    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:21.291604    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:21.292600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:21.331218    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:21.331218    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:21.412454    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:21.412454    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:21.412454    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:21.441164    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:21.441229    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:23.994395    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:24.020275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:24.054682    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.054682    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:24.058674    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:24.089654    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.089654    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:24.093569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:24.123224    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.123224    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:24.127942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:24.155350    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.155350    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:24.159192    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:24.192652    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.192652    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:24.197194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:24.229851    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.229851    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:24.233957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:24.262158    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.262158    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:24.266478    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:24.297683    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.297766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:24.297766    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:24.297766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:24.388464    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:24.388464    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:24.388464    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:24.416764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:24.416764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:24.468678    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:24.469203    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:24.532678    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:24.532678    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.075175    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:27.104797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:27.137440    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.137440    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:27.141581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:27.171103    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.171126    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:27.174625    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:27.205068    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.205102    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:27.208711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:27.237765    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.237806    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:27.241719    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:27.269838    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.269838    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:27.273353    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:27.300835    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.300835    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:27.304633    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:27.333062    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.333062    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:27.338523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:27.366572    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.366572    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:27.366572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:27.366572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.402514    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:27.402514    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:27.499452    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:27.499452    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:27.499452    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:27.528089    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:27.528089    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:27.596881    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:27.596881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.168154    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:30.194986    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:30.228709    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.228709    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:30.233961    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:30.268256    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.268256    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:30.271667    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:30.300456    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.300519    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:30.303870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:30.335955    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.335955    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:30.339590    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:30.367829    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.367829    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:30.373123    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:30.401294    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.401327    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:30.404974    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:30.436526    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.436526    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:30.440246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:30.478544    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.478599    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:30.478599    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:30.478651    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.544716    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:30.544716    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:30.584496    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:30.584496    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:30.671308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:30.671352    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:30.671352    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:30.699029    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:30.699029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:33.251744    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:33.280500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:33.311912    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.311912    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:33.316407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:33.347966    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.347966    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:33.351341    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:33.386249    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.386249    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:33.389828    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:33.420571    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.420571    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:33.423584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:33.450599    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.450599    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:33.453949    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:33.488480    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.488480    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:33.492797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:33.523382    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.523382    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:33.526929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:33.561860    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.561860    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:33.561860    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:33.561860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:33.628425    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:33.628425    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:33.666453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:33.666453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:33.756872    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:33.756872    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:33.756872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:33.785780    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:33.785780    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:36.342322    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:36.368238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:36.399529    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.399529    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:36.402710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:36.430561    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.430561    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:36.434233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:36.461894    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.461894    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:36.466270    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:36.492354    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.492354    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:36.495668    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:36.526818    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.526818    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:36.530606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:36.564752    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.564752    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:36.569130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:36.598403    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.598403    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:36.603579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:36.635757    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.635757    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:36.635757    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:36.635757    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:36.702715    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:36.702715    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:36.740740    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:36.740740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:36.827779    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:36.827779    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:36.827779    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:36.855113    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:36.855148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.404078    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:39.428626    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:39.461540    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.461540    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:39.465369    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:39.497259    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.497368    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:39.501168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:39.532526    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.532526    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:39.537388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:39.570114    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.570114    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:39.574332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:39.607392    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.607392    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:39.611100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:39.640933    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.640933    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:39.644381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:39.673224    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.673224    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:39.678235    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:39.706766    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.706766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:39.706766    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:39.706766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:39.734527    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:39.734527    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.787138    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:39.787138    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:39.849637    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:39.849637    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:39.889331    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:39.889331    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:39.977390    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:42.481792    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:42.508550    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:42.541632    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.541632    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:42.545635    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:42.595829    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.595829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:42.601196    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:42.630888    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.630888    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:42.634929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:42.665451    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.665451    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:42.668581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:42.701244    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.701244    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:42.705368    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:42.737250    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.737250    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:42.740441    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:42.766622    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.766700    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:42.770278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:42.801486    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.801486    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:42.801486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:42.801486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:42.866794    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:42.866930    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:42.906819    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:42.906819    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:43.000226    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
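The connection-refused errors come from kubectl itself: nothing is listening on the apiserver port, so every diagnostic that needs the API fails the same way. Using the paths from the commands above, the state can be confirmed directly inside the node (assuming curl is present in the node image):

    # connection refused here confirms no process is bound to 8443
    curl -sk https://localhost:8443/healthz || echo 'apiserver not listening'
    # reproduces the exact failure mode logged above
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get nodes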
	I1205 08:08:43.000226    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:43.000226    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:43.027011    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:43.027011    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
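The container-status command relies on a small shell fallback: if crictl is not on PATH, `which` prints nothing, the backtick expansion collapses to the literal word crictl, that command fails, and the || branch falls through to the Docker CLI. A clearer equivalent for manual use:

    # prefer the CRI view when crictl is installed, else fall back to docker
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi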
	I1205 08:08:45.586794    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:45.615024    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:45.642666    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.642666    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:45.646348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:45.675867    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.675867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:45.679650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:45.711785    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.711785    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:45.717449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:45.750065    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.750109    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:45.753406    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:45.782908    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.782908    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:45.786362    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:45.816309    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.816309    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:45.819889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:45.847629    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.847656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:45.850622    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:45.880676    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.880733    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:45.880759    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:45.880759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:45.943843    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:45.943843    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:45.984212    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:45.984212    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:46.071821    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:46.071821    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:46.071821    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:46.098280    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:46.098280    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
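From here the cycle above simply re-runs on a roughly 3-second interval until minikube's wait deadline expires, as summarised below. The observable behaviour amounts to a poll loop of this shape (a sketch of what the log shows, not minikube's actual source):

    # poll until a kube-apiserver process appears in the node, re-gathering
    # diagnostics (kubelet journal, dmesg, describe nodes, docker journal,
    # container status) after every miss
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sudo journalctl -u kubelet -n 400
        sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
        sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
            --kubeconfig=/var/lib/minikube/kubeconfig
        sudo journalctl -u docker -u cri-docker -n 400
        sleep 3
    done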
	I1205 08:08:48.651285    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:48.676952    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:48.706696    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.706696    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:48.710427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:48.738766    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.738766    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:48.746145    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:48.773486    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.773486    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:48.778542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:48.805908    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.805908    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:48.809817    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:48.840360    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.840360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:48.843723    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:48.871560    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.871560    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:48.875316    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:48.903556    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.903556    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:48.908924    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:48.938455    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.938455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:48.938455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:48.938455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:49.001951    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:49.001951    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:49.042098    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:49.042098    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:49.131350    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:49.131350    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:49.131350    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:49.166759    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:49.166759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:51.724851    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:51.752650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:51.780528    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.780542    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:51.784422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:51.816577    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.816577    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:51.819989    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:51.849244    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.849244    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:51.853211    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:51.881159    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.881222    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:51.884831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:51.917237    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.917237    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:51.921202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:51.951018    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.951018    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:51.955222    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:51.982262    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.982262    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:51.986170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:52.013482    6576 logs.go:282] 0 containers: []
	W1205 08:08:52.013526    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:52.013564    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:52.013564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:52.050334    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:52.050334    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:52.144178    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:52.133526   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.134871   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.136142   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.137800   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.139220   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:52.133526   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.134871   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.136142   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.137800   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.139220   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:52.144178    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:52.144178    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:52.171135    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:52.171135    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:52.223993    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:52.223993    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:54.792613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:54.817042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:54.848768    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.848768    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:54.852580    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:54.881045    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.881045    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:54.885194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:54.915368    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.915368    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:54.919753    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:54.952592    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.952679    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:54.956477    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:54.989304    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.989357    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:54.992976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:55.025855    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.025855    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:55.029407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:55.059218    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.059290    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:55.063529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:55.092992    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.092992    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:55.092992    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:55.092992    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:55.201249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:55.191114   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.192097   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.193360   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.194595   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.195561   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:55.191114   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.192097   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.193360   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.194595   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.195561   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:55.201249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:55.201249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:55.228877    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:55.228907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:55.286872    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:55.286872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:55.357844    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:55.357844    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:57.912434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:57.938621    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:57.968927    6576 logs.go:282] 0 containers: []
	W1205 08:08:57.968927    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:57.975548    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:58.003200    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.003200    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:58.006983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:58.037886    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.037886    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:58.041594    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:58.072037    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.072037    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:58.076711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:58.118201    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.118201    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:58.122059    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:58.150468    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.150468    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:58.154554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:58.186009    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.186009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:58.189676    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:58.219204    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.219204    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:58.219204    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:58.219204    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:58.283572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:58.283572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:58.322291    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:58.322291    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:58.406023    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:58.406023    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:58.406023    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:58.434361    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:58.434881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:00.986031    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:01.012520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:01.041860    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.041860    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:01.045736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:01.074168    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.074168    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:01.081136    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:01.115160    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.115160    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:01.121214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:01.152200    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.152200    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:01.155786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:01.187849    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.187849    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:01.193651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:01.220927    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.220927    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:01.225251    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:01.262648    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.262648    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:01.266549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:01.298388    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.298388    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:01.298459    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:01.298491    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:01.389098    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:01.389126    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:01.389126    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:01.418232    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:01.418232    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:01.463083    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:01.463083    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:01.528159    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:01.528159    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:04.078505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:04.106462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:04.136412    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.136412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:04.139845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:04.168393    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.168465    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:04.171965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:04.203281    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.203281    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:04.207129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:04.235244    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.235244    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:04.239720    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:04.271746    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.271746    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:04.279903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:04.308486    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.308486    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:04.312482    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:04.341988    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.341988    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:04.345122    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:04.378152    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.378152    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:04.378152    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:04.378152    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:04.443403    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:04.443403    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:04.484661    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:04.484661    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:04.574793    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:04.560661   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.561649   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.566401   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.568432   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.570652   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:04.560661   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.561649   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.566401   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.568432   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.570652   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:04.574793    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:04.574793    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:04.606357    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:04.606357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:07.162554    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:07.194738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:07.227905    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.227977    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:07.232048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:07.262861    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.262861    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:07.266595    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:07.297184    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.297184    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:07.300873    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:07.331523    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.331523    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:07.335838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:07.367893    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.367893    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:07.371282    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:07.400934    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.400934    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:07.403928    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:07.431616    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.431616    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:07.435314    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:07.469043    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.469043    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:07.469043    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:07.469043    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:07.497832    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:07.497832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:07.547846    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:07.547846    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:07.611682    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:07.611682    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:07.651105    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:07.651105    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:07.741756    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:07.730861   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.731799   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.734095   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.735203   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.736136   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:07.730861   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.731799   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.734095   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.735203   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.736136   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:10.247138    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:10.275755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:10.311911    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.311911    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:10.317436    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:10.347243    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.347243    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:10.353296    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:10.384412    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.384412    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:10.389236    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:10.419505    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.419505    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:10.423688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:10.451213    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.451213    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:10.457390    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:10.485001    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.485001    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:10.488370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:10.519268    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.519268    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:10.524029    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:10.551544    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.551544    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:10.551544    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:10.551544    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:10.618971    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:10.618971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:10.657753    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:10.657753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:10.751422    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:10.740331   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.741382   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.742135   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.746174   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.747103   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:10.740331   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.741382   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.742135   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.746174   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.747103   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:10.751422    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:10.751422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:10.777901    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:10.778003    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:13.340867    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:13.373007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:13.404147    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.404191    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:13.408078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:13.440768    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.440768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:13.444748    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:13.474390    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.474390    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:13.478381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:13.508004    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.508057    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:13.511749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:13.543789    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.543789    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:13.547384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:13.576308    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.576377    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:13.579736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:13.609792    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.609792    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:13.613298    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:13.642091    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.642091    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:13.642091    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:13.642091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:13.671624    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:13.671686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:13.718995    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:13.718995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:13.782056    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:13.782056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:13.821453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:13.821453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:13.928916    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:13.918145   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.919184   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.920131   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.922446   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.923724   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:16.433905    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:16.459887    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:16.496160    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.496160    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:16.499639    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:16.526877    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.526877    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:16.530750    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:16.560261    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.560261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:16.563991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:16.595914    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.595914    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:16.599869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:16.627694    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.627694    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:16.632403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:16.660769    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.660769    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:16.664194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:16.692707    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.692707    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:16.698036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:16.728749    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.728749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:16.728749    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:16.728749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:16.778953    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:16.779017    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:16.841091    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:16.841091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:16.881145    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:16.881145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:16.969295    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:16.959645   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.960522   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.962481   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.963671   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.964721   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:16.969332    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:16.969362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:19.502757    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:19.529429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:19.557499    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.557499    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:19.561490    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:19.590127    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.590127    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:19.594042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:19.622382    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.622382    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:19.626026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:19.653513    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.653513    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:19.656672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:19.686153    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.686153    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:19.691297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:19.720831    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.720858    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:19.724786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:19.751107    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.751107    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:19.754979    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:19.782999    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.782999    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:19.782999    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:19.782999    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:19.844801    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:19.844801    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:19.884439    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:19.884439    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:19.977224    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:19.964996   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.968924   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.970786   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.973180   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.975233   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:19.977224    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:19.977224    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:20.007404    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:20.007404    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:22.569427    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:22.596121    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:22.628181    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.628181    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:22.632086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:22.660848    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.660848    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:22.664755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:22.694182    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.694261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:22.698085    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:22.726532    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.726600    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:22.730354    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:22.757319    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.757355    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:22.760937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:22.792791    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.792791    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:22.799388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:22.841372    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.841372    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:22.845285    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:22.879377    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.879377    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:22.879377    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:22.879377    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:22.946156    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:22.946156    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:22.990461    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:22.990461    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:23.119453    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:23.109436   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.110223   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.112884   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.115261   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.117081   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:23.119453    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:23.119453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:23.146199    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:23.147241    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:25.703191    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:25.728570    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:25.758884    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.758884    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:25.765071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:25.792957    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.792957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:25.796556    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:25.825466    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.825466    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:25.828728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:25.857451    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.857521    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:25.861306    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:25.887700    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.887700    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:25.891071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:25.920875    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.920875    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:25.924452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:25.952908    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.952952    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:25.956305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:25.987608    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.987608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:25.987608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:25.987608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:26.027162    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:26.027162    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:26.120245    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:26.107417   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.108200   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.112823   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.113923   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.114975   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:26.120245    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:26.120245    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:26.147670    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:26.147697    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:26.198923    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:26.198963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:28.769076    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:28.797716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:28.829859    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.829898    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:28.833257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:28.864507    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.864507    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:28.868407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:28.898827    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.898827    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:28.902971    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:28.933087    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.933087    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:28.937063    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:28.964140    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.964140    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:28.968403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:28.997620    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.997620    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:29.001779    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:29.035745    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.035745    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:29.038757    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:29.068429    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.068429    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:29.068429    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:29.068429    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:29.124688    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:29.124688    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:29.188675    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:29.188675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:29.227887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:29.227887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:29.312828    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:29.301515   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.302784   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.303557   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.306066   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.307186   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:29.312828    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:29.312828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:31.845911    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:31.878797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:31.916523    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.916523    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:31.919583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:31.950914    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.950976    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:31.954687    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:31.983555    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.983580    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:31.987603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:32.021007    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.021007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:32.025190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:32.056980    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.057033    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:32.060500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:32.104780    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.104780    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:32.108815    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:32.135429    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.135494    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:32.138969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:32.171260    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.171260    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:32.171260    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:32.171260    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:32.237752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:32.237752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:32.277887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:32.277887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:32.365810    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:32.355223   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.356563   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.358244   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.359525   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.360794   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:32.365810    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:32.365810    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:32.392252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:32.392252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:34.943627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:34.969529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:35.010672    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.010672    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:35.015462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:35.048036    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.048036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:35.055991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:35.103005    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.103005    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:35.106890    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:35.137906    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.137906    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:35.141530    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:35.172625    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.172625    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:35.176175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:35.209474    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.209474    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:35.213175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:35.244787    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.244787    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:35.248557    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:35.275127    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.275158    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:35.275158    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:35.275158    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:35.334298    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:35.334298    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:35.373969    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:35.373969    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:35.459656    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:35.459755    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:35.459755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:35.489057    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:35.489057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:38.049404    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:38.073507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:38.101267    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.101337    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:38.104951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:38.134276    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.134276    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:38.139127    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:38.166437    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.166437    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:38.170518    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:38.199145    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.199145    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:38.202760    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:38.230466    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.230466    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:38.233640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:38.263867    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.263867    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:38.267542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:38.297791    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.297791    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:38.301874    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:38.332980    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.332980    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:38.332980    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:38.332980    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:38.396086    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:38.396086    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:38.433018    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:38.433018    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:38.516847    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:38.516847    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:38.516847    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:38.545985    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:38.545985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:41.097758    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:41.125607    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:41.156423    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.156423    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:41.159823    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:41.188324    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.188383    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:41.192299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:41.224751    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.224789    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:41.228655    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:41.257790    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.257790    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:41.261606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:41.292935    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.292999    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:41.296487    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:41.322728    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.322728    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:41.326980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:41.355569    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.355569    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:41.359412    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:41.388228    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.388228    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:41.388228    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:41.388228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:41.454094    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:41.454094    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:41.492536    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:41.492536    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:41.584848    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
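	# annotation — not captured output: every failed "describe nodes" block in
	# this dump has the same shape. kubectl inside the node dials the apiserver
	# on localhost:8443 and is refused because no apiserver container is
	# running yet. The probe can be replayed by hand; this is a sketch, with
	# the command copied from the Run: line above and the profile name taken
	# from the Docker section further down:
	#
	#   $ minikube ssh -p newest-cni-042100 -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig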
	I1205 08:09:41.584892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:41.584892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:41.611807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:41.611807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:44.169483    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:44.196254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:44.224412    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.224412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:44.229628    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:44.257724    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.257724    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:44.262355    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:44.289872    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.289926    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:44.293506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:44.321891    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.321891    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:44.325045    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:44.354424    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.354424    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:44.357980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:44.388960    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.388960    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:44.392224    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:44.424484    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.424484    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:44.427710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:44.458834    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.458834    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:44.458834    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:44.458834    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:44.523336    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:44.523336    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:44.560362    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:44.560362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:44.656711    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:44.656711    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:44.656711    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:44.682009    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:44.683010    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.243380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:47.270606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:47.302678    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.302720    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:47.305835    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:47.334169    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.334213    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:47.338162    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:47.370622    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.370693    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:47.374238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:47.406764    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.406787    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:47.410449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:47.439290    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.439332    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:47.442816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:47.475239    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.475239    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:47.479100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:47.510196    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.510196    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:47.513831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:47.543315    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.543378    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:47.543378    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:47.543411    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:47.577600    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:47.577600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.651517    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:47.651517    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:47.717530    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:47.717530    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:47.757989    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:47.757989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:47.848615    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:50.354473    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:50.381662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:50.410303    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.410303    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:50.416210    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:50.443479    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.443479    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:50.447606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:50.475214    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.475214    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:50.479409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:50.508984    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.508984    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:50.513185    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:50.544532    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.544532    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:50.548200    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:50.578350    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.578350    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:50.583137    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:50.615656    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.615656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:50.619983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:50.649117    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.649117    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:50.649117    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:50.649117    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:50.678837    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:50.678837    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:50.730963    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:50.730963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:50.797442    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:50.797442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:50.839051    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:50.840050    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:50.934073    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.440116    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:53.465957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:53.497390    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.497462    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:53.501077    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:53.529488    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.529488    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:53.536331    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:53.563367    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.563367    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:53.566361    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:53.596894    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.596894    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:53.600611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:53.630623    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.630623    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:53.634434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:53.664123    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.664123    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:53.668403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:53.697948    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.697948    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:53.701419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:53.730378    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.730462    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:53.730462    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:53.730462    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:53.798465    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:53.798465    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:53.841124    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:53.841124    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:53.935344    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.936318    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:53.936318    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:53.965040    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:53.965040    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:56.520907    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:56.551718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:56.584506    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.584506    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:56.588065    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:56.618214    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.618214    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:56.622199    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:56.650798    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.650798    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:56.654367    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:56.685409    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.685440    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:56.688781    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:56.719049    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.719163    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:56.722810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:56.753646    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.753646    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:56.757666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:56.793942    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.793942    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:56.798049    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:56.827315    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.827315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:56.827315    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:56.827315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:56.893213    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:56.893213    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:56.931234    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:56.931234    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:57.020142    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:57.020142    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:57.020142    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:57.048871    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:57.048871    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:59.606004    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:59.632524    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:59.662177    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.662177    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:59.666311    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:59.701152    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.701202    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:59.704398    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:59.733278    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.733278    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:59.738174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:59.769038    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.769038    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:59.773266    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:59.814259    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.814259    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:59.818330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:59.848066    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.848066    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:59.851684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:59.880029    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.880029    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:59.884457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:59.914608    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.914608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:59.914608    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:59.914608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:59.978490    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:59.978490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:00.018881    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:00.018881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:00.109744    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:00.109744    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:00.109744    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:00.137522    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:00.137591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:02.693722    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:02.718495    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:10:02.754864    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.754864    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:10:02.758547    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:10:02.795133    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.795231    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:10:02.798914    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:10:02.828115    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.828115    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:10:02.831263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:10:02.864241    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.864241    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:10:02.867861    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:10:02.895555    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.895555    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:10:02.901617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:10:02.931756    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.931756    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:10:02.935718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:10:02.964034    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.964034    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:10:02.968113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:10:03.000080    6576 logs.go:282] 0 containers: []
	W1205 08:10:03.000080    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:10:03.000080    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:03.000080    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:03.092694    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:03.094183    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:03.094183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:03.124625    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:03.124625    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:03.178920    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:10:03.178920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:10:03.237776    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:10:03.237776    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:05.783793    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:05.810874    6576 out.go:203] 
	W1205 08:10:05.812874    6576 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 08:10:05.812874    6576 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 08:10:05.812874    6576 out.go:285] * Related issues:
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 08:10:05.815880    6576 out.go:203] 
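	# annotation — not captured output: the loop above is minikube polling for
	# an apiserver process roughly every three seconds until its 6m0s node wait
	# expires in K8S_APISERVER_MISSING. A minimal sketch of the same probes,
	# run by hand against the node; the pgrep pattern and docker filter are
	# copied verbatim from the Run: lines above:
	#
	#   $ minikube ssh -p newest-cni-042100 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	#   $ minikube ssh -p newest-cni-042100 -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'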
	
	
	==> Docker <==
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014561584Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014638592Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014649493Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014654993Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014662094Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014686897Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014806909Z" level=info msg="Initializing buildkit"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.159292906Z" level=info msg="Completed buildkit initialization"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170523657Z" level=info msg="Daemon has completed initialization"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170725677Z" level=info msg="API listen on [::]:2376"
	Dec 05 08:04:00 newest-cni-042100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170749180Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170751380Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Loaded network plugin cni"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 08:04:01 newest-cni-042100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
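	# annotation — not captured output: dockerd comes up cleanly here, but the
	# warning above that cgroup v1 support is deprecated matters for the
	# kubelet failure below. A quick way to confirm which cgroup hierarchy the
	# node runs (a general Linux check, not taken from this log): cgroup2fs
	# means cgroup v2, tmpfs means the legacy v1 hierarchy.
	#
	#   $ minikube ssh -p newest-cni-042100 -- stat -fc %T /sys/fs/cgroup/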
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:09.941425   19688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:09.942771   19688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:09.943937   19688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:09.945011   19688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:09.946290   19688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.912373] CPU: 10 PID: 467231 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f59c4559b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f59c4559af6.
	[  +0.000001] RSP: 002b:00007fff7b401a80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.986945] CPU: 6 PID: 467375 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f68553b7b20
	[  +0.000010] Code: Unable to access opcode bytes at RIP 0x7f68553b7af6.
	[  +0.000001] RSP: 002b:00007ffe7761e510 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 08:10:09 up  3:43,  0 user,  load average: 0.88, 2.23, 3.31
	Linux newest-cni-042100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:10:06 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:07 newest-cni-042100 kubelet[19523]: E1205 08:10:07.302625   19523 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:07 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:08 newest-cni-042100 kubelet[19536]: E1205 08:10:08.061867   19536 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:08 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:08 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:08 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 05 08:10:08 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:08 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:08 newest-cni-042100 kubelet[19564]: E1205 08:10:08.802977   19564 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:08 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:08 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:09 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 05 08:10:09 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:09 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:09 newest-cni-042100 kubelet[19583]: E1205 08:10:09.554492   19583 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:09 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:09 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
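	# annotation — not captured output: this is the root cause. The kubelet
	# config validation for v1.35.0-beta.0 rejects cgroup v1 hosts, so systemd
	# restart-loops the unit (counter 486..489 in this excerpt) and the
	# apiserver never appears, producing every failure above. The loop can be
	# inspected directly:
	#
	#   $ minikube ssh -p newest-cni-042100 -- sudo systemctl status kubelet --no-pager
	#   $ minikube ssh -p newest-cni-042100 -- sudo journalctl -u kubelet -n 50 --no-pager
	#
	# On a WSL2 host like this one, switching the VM to cgroup v2 (for example
	# via the kernelCommandLine setting in .wslconfig) is one commonly cited
	# remedy; that is an assumption about the fix, not something this log shows.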
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (606.408ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-042100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (384.73s)
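The status probe used by these helpers is minikube's Go-template formatter; the same check can be reproduced by hand (a sketch, profile name taken from this test). The non-zero exit mirrors the stopped state rather than a command error, which is why the helper prints "(may be ok)":

	# prints the single requested field, e.g. "Stopped"
	out/minikube-windows-amd64.exe status -p newest-cni-042100 --format={{.APIServer}}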

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 08:04:25.460229    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
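Every EOF warning in this block is the same symptom: the helper polls the host-mapped apiserver endpoint 127.0.0.1:61565 (the mapping for the container's 8443/tcp, visible in the docker inspect output further down), and the TCP connection is closed without a response because the apiserver inside the container is stopped. A hedged by-hand equivalent of the probe:

	# assumes the current kubectl context points at the no-preload-104100 profile
	kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# raw probe against the same mapped port (/livez is a standard apiserver endpoint)
	curl -k https://127.0.0.1:61565/livez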
E1205 08:04:35.514080    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.111688    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.142551    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.148991    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.161010    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.183290    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.225124    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.306649    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.469161    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:36.791768    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:37.433729    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:38.715785    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
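The cert_rotation errors interleaved throughout are client-side noise rather than part of this failure: the test binary's TLS transport cache still holds kubeconfig entries for profiles deleted earlier in the run (false-218000, kindnet-218000, and so on), so every cache refresh re-opens a client.crt that no longer exists on disk. A quick cross-check, as a sketch using the paths printed above:

	# profiles that still exist versus the stale cert paths in the log
	out/minikube-windows-amd64.exe profile list
	dir C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles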
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:05:03.823147    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:05:03.905630    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:05:17.129623    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:05:22.688331    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:05:26.906964    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:05:58.092437    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:06:29.854379    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:31.977074    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:31.984138    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:31.995679    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:32.018055    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:32.060119    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:32.141480    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:32.303488    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:32.625249    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:06:33.267677    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.261414    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.268693    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.281010    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.303241    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.345391    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.427270    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.549209    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.589071    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:34.910918    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:35.553143    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:36.835256    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:37.111197    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:39.397694    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:42.233277    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:06:44.520477    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:48.829722    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:51.648522    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:06:52.475313    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:06:54.762668    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:07:12.957794    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:07:15.244472    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:07:19.358909    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:07:20.015615    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:07:20.033850    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:07:23.983183    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:07:42.916844    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:07:47.751282    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:07:53.919753    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:07:56.207209    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:08:29.078325    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:08:55.960668    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:55.967498    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:55.979983    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:56.001843    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:56.044110    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:56.125733    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:56.287390    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:56.609160    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:57.251292    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:08:58.533169    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:01.095315    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:09:04.960935    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:05.996149    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.163984    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.171135    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.182563    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.204050    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.217187    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.245737    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.327561    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.489420    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:06.811526    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:07.453649    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:08.736894    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:11.298766    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:09:15.842616    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:16.421371    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:16.458728    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:18.130189    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:09:26.663570    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:32.673941    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:09:36.116634    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:36.146989    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:36.940408    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:09:47.145603    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:09:52.153778    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:10:03.860105    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:10:05.777434    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:11:29.860686    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:11:31.981651    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:11:34.265580    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:11:39.825904    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:11:50.031845    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:11:51.652829    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:11:59.686538    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:12:01.975823    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:12:20.039148    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:12:23.988883    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:12:42.922174    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:12:52.938047    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
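The 9m0s wait that just expired is functionally close to a kubectl wait on the same label selector (a hedged equivalent, assuming a working context for this profile; the test itself polls the pod list rather than calling kubectl):

	kubectl wait pod -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s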
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 2 (602.3055ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
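All three proxy variables are empty, which rules out host proxy settings as a factor here. The same snapshot can be taken by hand on the Windows host (PowerShell sketch):

	Get-ChildItem Env: | Where-Object Name -Match 'PROXY'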
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
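In the inspect output below, every PortBindings entry requests HostPort "0", i.e. let Docker pick a free host port; the ports actually allocated appear under NetworkSettings.Ports, with 8443/tcp landing on 61565 - exactly the endpoint the EOF warnings above were hitting. To resolve a single mapping without reading the full JSON (sketch):

	docker port no-preload-104100 8443/tcp   # -> 127.0.0.1:61565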
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-104100
helpers_test.go:243: (dbg) docker inspect no-preload-104100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043",
	        "Created": "2025-12-05T07:47:18.090294673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 414493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:58:06.386924979Z",
	            "FinishedAt": "2025-12-05T07:57:57.665009272Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hosts",
	        "LogPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043-json.log",
	        "Name": "/no-preload-104100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-104100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-104100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-104100",
	                "Source": "/var/lib/docker/volumes/no-preload-104100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-104100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-104100",
	                "name.minikube.sigs.k8s.io": "no-preload-104100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db4519a857b1cb5f334b0df06abf490ceaca02f8fd29297b385218566b669e33",
	            "SandboxKey": "/var/run/docker/netns/db4519a857b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61566"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61564"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61565"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-104100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "707b5f83051fc4c181f3506b97f5ea358824531428895a55938badd3159b6c9f",
	                    "EndpointID": "4524197e7adfcc8ed0cbc2de51217f52907988f5d42b7f9fdc11804701eaff4d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-104100",
	                        "5f2a793d7573"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
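The inspect output above shows how the test reaches the node from the Windows host: every container port is published on 127.0.0.1 with an ephemeral host port, so the Kubernetes apiserver (8443/tcp) is reachable at 127.0.0.1:61565, the same endpoint a later node_ready check in this log fails to reach with EOF. A minimal sketch of pulling one of these mappings out with the docker CLI, using the same Go-template shape minikube itself runs further down in this log (container name taken from this report; quoting shown for a POSIX shell rather than PowerShell):

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
      no-preload-104100
    # prints the ephemeral host port, 61565 in the "Ports" block above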
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 2 (601.9685ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25: (1.6951945s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-218000 sudo systemctl cat docker --no-pager                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo crio config                                               │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/docker/daemon.json                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo docker system info                                       │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p bridge-218000                                                                │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat cri-docker --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cri-dockerd --version                                    │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status containerd --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat containerd --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /lib/systemd/system/containerd.service               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/containerd/config.toml                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo containerd config dump                                   │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status crio --all --full --no-pager            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │                     │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat crio --no-pager                            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo crio config                                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p kubenet-218000                                                               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ image   │ newest-cni-042100 image list --format=json                                      │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ pause   │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ unpause │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ delete  │ -p newest-cni-042100                                                            │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ delete  │ -p newest-cni-042100                                                            │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	W1205 08:03:44.511207    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:46.513793    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	Log file created at: 2025/12/05 08:03:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:03:48.079593    6576 out.go:360] Setting OutFile to fd 1628 ...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.133685    6576 out.go:374] Setting ErrFile to fd 1512...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.149881    6576 out.go:368] Setting JSON to false
	I1205 08:03:48.152825    6576 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13085,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:03:48.152825    6576 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:03:48.159945    6576 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:03:48.164658    6576 notify.go:221] Checking for updates...
	I1205 08:03:48.167308    6576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:48.170547    6576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:03:48.173264    6576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:03:48.177277    6576 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:03:48.179134    6576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:03:48.182963    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:48.184223    6576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:03:48.306826    6576 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:03:48.310816    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.562528    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.540004205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.565521    6576 out.go:179] * Using the docker driver based on existing profile
	I1205 08:03:48.568528    6576 start.go:309] selected driver: docker
	I1205 08:03:48.568528    6576 start.go:927] validating driver "docker" against &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.568528    6576 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:03:48.621627    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.870676    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.852383077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.870676    6576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 08:03:48.870676    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:03:48.871676    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:03:48.871676    6576 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.874674    6576 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 08:03:48.876674    6576 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:03:48.879674    6576 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:03:48.881674    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:03:48.881674    6576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 08:03:48.924123    6576 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:48.965045    6576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:03:48.965045    6576 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 08:03:49.173795    6576 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:49.174041    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 08:03:49.174210    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 08:03:49.176070    6576 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:03:49.176070    6576 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:49.176610    6576 start.go:364] duration metric: took 539.2µs to acquireMachinesLock for "newest-cni-042100"
	I1205 08:03:49.176876    6576 start.go:96] Skipping create...Using existing machine configuration
	I1205 08:03:49.176954    6576 fix.go:54] fixHost starting: 
	I1205 08:03:49.185185    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:49.467905    6576 fix.go:112] recreateIfNeeded on newest-cni-042100: state=Stopped err=<nil>
	W1205 08:03:49.468085    6576 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 08:03:46.247259    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:48.745542    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:50.273234    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:48.514113    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:50.532984    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:53.014533    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:49.492567    6576 out.go:252] * Restarting existing docker container for "newest-cni-042100" ...
	I1205 08:03:49.497575    6576 cli_runner.go:164] Run: docker start newest-cni-042100
	I1205 08:03:50.779131    6576 cli_runner.go:217] Completed: docker start newest-cni-042100: (1.2815354s)
	I1205 08:03:50.788112    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:51.139299    6576 kic.go:430] container "newest-cni-042100" state is running.
	I1205 08:03:51.164376    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:51.273747    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:51.276892    6576 machine.go:94] provisionDockerMachine start ...
	I1205 08:03:51.284394    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:51.396042    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:51.397040    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:51.397040    6576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:03:51.400042    6576 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
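The "handshake failed: EOF" above is the usual race right after "docker start": the container is running before sshd inside it is accepting connections, so the first dial fails and the provisioner keeps retrying until the "hostname" command succeeds (it does a few seconds later in this log). A rough shell equivalent of that wait loop, assuming the 127.0.0.1:62708 mapping and the machine key shown in this log (POSIX-style key path used purely for illustration):

    # retry until sshd in the freshly started container accepts a session
    until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
          -i ~/.minikube/machines/newest-cni-042100/id_rsa \
          -p 62708 docker@127.0.0.1 hostname; do
      sleep 1
    done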
	I1205 08:03:52.385305    6576 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.385658    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 08:03:52.385720    6576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.211458s
	I1205 08:03:52.385800    6576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 08:03:52.435659    6576 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.435659    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 08:03:52.435659    6576 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2613971s
	I1205 08:03:52.435659    6576 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 08:03:52.467883    6576 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.468216    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 08:03:52.468216    6576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.2939732s
	I1205 08:03:52.468216    6576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 08:03:52.472465    6576 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.472465    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 08:03:52.472465    6576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2982024s
	I1205 08:03:52.472465    6576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 08:03:52.472991    6576 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.473088    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 08:03:52.473088    6576 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2988253s
	I1205 08:03:52.473088    6576 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 08:03:52.478918    6576 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.479537    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 08:03:52.479537    6576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3052743s
	I1205 08:03:52.479537    6576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 08:03:52.488107    6576 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.489284    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 08:03:52.489284    6576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.3150206s
	I1205 08:03:52.489284    6576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 08:03:52.587256    6576 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.588098    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 08:03:52.588098    6576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.413907s
	I1205 08:03:52.588098    6576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 08:03:52.588098    6576 cache.go:87] Successfully saved all images to host disk.
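The eight "save to tar file ... succeeded" lines above are the fallback path: no preload tarball exists for v1.35.0-beta.0 (both preload URLs earlier in this log returned 404), so minikube caches each required image as an individual tarball under .minikube\cache\images and loads them into the node one by one. The missing preload can be confirmed directly; the URL below is copied from the preload.go warning above:

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 | head -n 1
    # expect an HTTP 404 status line, matching the warning logged above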
	W1205 08:03:50.818460    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	I1205 08:03:53.244351    4412 pod_ready.go:94] pod "coredns-66bc5c9577-zrgxp" is "Ready"
	I1205 08:03:53.244351    4412 pod_ready.go:86] duration metric: took 21.0105368s for pod "coredns-66bc5c9577-zrgxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.250834    4412 pod_ready.go:83] waiting for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.262503    4412 pod_ready.go:94] pod "etcd-bridge-218000" is "Ready"
	I1205 08:03:53.262503    4412 pod_ready.go:86] duration metric: took 11.6685ms for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.271087    4412 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.281426    4412 pod_ready.go:94] pod "kube-apiserver-bridge-218000" is "Ready"
	I1205 08:03:53.281426    4412 pod_ready.go:86] duration metric: took 10.3388ms for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.286385    4412 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.438718    4412 pod_ready.go:94] pod "kube-controller-manager-bridge-218000" is "Ready"
	I1205 08:03:53.438718    4412 pod_ready.go:86] duration metric: took 152.3311ms for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.641268    4412 pod_ready.go:83] waiting for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.039664    4412 pod_ready.go:94] pod "kube-proxy-8r4gs" is "Ready"
	I1205 08:03:54.039664    4412 pod_ready.go:86] duration metric: took 398.3895ms for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.241161    4412 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:94] pod "kube-scheduler-bridge-218000" is "Ready"
	I1205 08:03:54.641085    4412 pod_ready.go:86] duration metric: took 399.9175ms for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:40] duration metric: took 32.4419039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:54.749081    4412 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:03:54.754768    4412 out.go:179] * Done! kubectl is now configured to use "bridge-218000" cluster and "default" namespace by default
	W1205 08:03:55.516894    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:58.012284    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:54.578463    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.578463    6576 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 08:03:54.583153    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.645702    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.646148    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.646193    6576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 08:03:54.866524    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.872867    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.933417    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.934199    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.934272    6576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:03:55.129977    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
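The /etc/hosts script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1 (the guest is Debian 12 per the os-release check later in this log): if the hostname already resolves it does nothing, otherwise it rewrites an existing 127.0.1.1 entry in place or appends a new one. The empty output here is consistent with one of the two silent branches; only the tee branch would have echoed a line. A quick check of the result, run inside the guest:

    grep '^127.0.1.1' /etc/hosts
    # 127.0.1.1 newest-cni-042100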
	I1205 08:03:55.129977    6576 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:03:55.129977    6576 ubuntu.go:190] setting up certificates
	I1205 08:03:55.129977    6576 provision.go:84] configureAuth start
	I1205 08:03:55.133735    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:55.190185    6576 provision.go:143] copyHostCerts
	I1205 08:03:55.190185    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:03:55.190185    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:03:55.190984    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:03:55.191986    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:03:55.191986    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:03:55.192251    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:03:55.193178    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:03:55.193178    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:03:55.193462    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:03:55.194234    6576 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
	I1205 08:03:55.277216    6576 provision.go:177] copyRemoteCerts
	I1205 08:03:55.282373    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:03:55.285821    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.350220    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:55.476652    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:03:55.511250    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:03:55.546706    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 08:03:55.583614    6576 provision.go:87] duration metric: took 453.6304ms to configureAuth
	I1205 08:03:55.583614    6576 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:03:55.585275    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:55.589206    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.651189    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.652212    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.652246    6576 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:03:55.836329    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:03:55.837449    6576 ubuntu.go:71] root file system type: overlay
	I1205 08:03:55.837646    6576 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:03:55.841558    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.910453    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.911069    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.911069    6576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:03:56.123635    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:03:56.128031    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.191540    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:56.191765    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:56.191765    6576 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:03:56.396364    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
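The SSH command above restarts docker only when the unit actually changed: diff -u exits non-zero on any difference, which triggers the || branch that swaps in the new file, reloads units, and restarts the service. The same guard written out long-hand (equivalent to the one-liner, paths as in this run):

	# Restart a service only when its unit file actually changed.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	fi

An unchanged config therefore skips the restart entirely, which keeps re-provisioning idempotent.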
	I1205 08:03:56.396364    6576 machine.go:97] duration metric: took 5.1193899s to provisionDockerMachine
	I1205 08:03:56.396364    6576 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 08:03:56.396897    6576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:03:56.402233    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:03:56.406223    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.460168    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.609105    6576 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:03:56.617925    6576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:03:56.617925    6576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:03:56.618732    6576 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:03:56.623542    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:03:56.637899    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:03:56.671787    6576 start.go:296] duration metric: took 274.8468ms for postStartSetup
	I1205 08:03:56.675921    6576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:03:56.678948    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.735289    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.884826    6576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:03:56.893835    6576 fix.go:56] duration metric: took 7.7168367s for fixHost
	I1205 08:03:56.893835    6576 start.go:83] releasing machines lock for "newest-cni-042100", held for 7.7169474s
	I1205 08:03:56.896826    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:56.959384    6576 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:03:56.965413    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.966255    6576 ssh_runner.go:195] Run: cat /version.json
	I1205 08:03:56.973872    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:57.022198    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:57.026201    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	W1205 08:03:57.148711    6576 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
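Exit status 127 means the shell could not find the binary: the probe ran the Windows binary name curl.exe inside the Linux guest, so it fails regardless of actual connectivity, and that failure is what produces the registry warning at 08:03:57.264705 below. A hedged manual re-check, assuming plain curl is present in the kicbase image (the log does not confirm that either way):

	# Re-run the registry probe with the Linux binary name (curl, not curl.exe);
	# newest-cni-042100 is the container from this run, -m 2 caps the request at 2s.
	docker exec -t newest-cni-042100 curl -sS -m 2 https://registry.k8s.io/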
	I1205 08:03:57.162212    6576 ssh_runner.go:195] Run: systemctl --version
	I1205 08:03:57.181097    6576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:03:57.193288    6576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:03:57.197753    6576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:03:57.214357    6576 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 08:03:57.214357    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.214357    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.214357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:57.242461    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:03:57.262818    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 08:03:57.264705    6576 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:03:57.264749    6576 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:03:57.282712    6576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:03:57.286891    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:57.310466    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.333091    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:57.356105    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.377603    6576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:57.401090    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:57.423330    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:57.445407    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:57.472206    6576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:57.488210    6576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:03:57.505210    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:57.657790    6576 ssh_runner.go:195] Run: sudo systemctl restart containerd
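The run of sed edits above rewrites /etc/containerd/config.toml in place (SystemdCgroup = false selects the cgroupfs driver to match the host, sandbox_image pins the pause image, conf_dir points CNI at /etc/cni/net.d), and the daemon-reload plus restart applies them. A quick verification sketch, to be run inside the minikube container (e.g. via minikube ssh):

	# Confirm the rewritten containerd settings before relying on them.
	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sudo systemctl is-active containerd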
	I1205 08:03:57.802417    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.802417    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.807146    6576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:57.832467    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.857712    6576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:57.930272    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.960276    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:57.984286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:58.017277    6576 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:58.032288    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:58.048281    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:03:58.077282    6576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:58.275290    6576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:58.457293    6576 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:58.457293    6576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
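The 130-byte /etc/docker/daemon.json pushed here carries the cgroup-driver setting for dockerd; the payload itself is not echoed in this log, but a typical equivalent (an assumption, not the verbatim file) looks like:

	# Illustrative daemon.json selecting the cgroupfs driver; the real file
	# written by the tool is not shown in this log.
	printf '%s\n' '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker

Keeping dockerd, containerd, and the kubelet on the same cgroup driver (cgroupfs here, per the detection at 08:03:57.802417) is what the surrounding steps are coordinating.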
	I1205 08:03:58.486286    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:58.509287    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:58.648318    6576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:04:00.173930    6576 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5255881s)
	I1205 08:04:00.177929    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:04:00.201541    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:04:00.228851    6576 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 08:04:00.259044    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:00.283032    6576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:04:00.429299    6576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:04:00.593446    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.738544    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:04:00.766865    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:04:00.791407    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.930315    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:04:01.041317    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
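The cri-docker sequence above follows the usual socket-activated-service ordering: unmask and enable the socket, reload units, restart the socket before the service so the socket path exists when the service comes up, then verify with is-active. Condensed, the same steps are:

	# Bring cri-docker back cleanly; socket first, then the service.
	sudo systemctl unmask cri-docker.socket
	sudo systemctl enable cri-docker.socket
	sudo systemctl daemon-reload
	sudo systemctl restart cri-docker.socket
	sudo systemctl restart cri-docker.service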
	I1205 08:04:01.059628    6576 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:04:01.064630    6576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:04:01.072635    6576 start.go:564] Will wait 60s for crictl version
	I1205 08:04:01.076636    6576 ssh_runner.go:195] Run: which crictl
	I1205 08:04:01.090615    6576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:04:01.132099    6576 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:04:01.136068    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.182106    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.227459    6576 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 08:04:01.231071    6576 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 08:04:01.375969    6576 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:04:01.379962    6576 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:04:01.387350    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
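The /etc/hosts update above filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the result back over the original in one cp, so all other entries survive and the file is never left half-written. The same pattern spelled out (192.168.65.254 is the host IP dug up just above):

	# Replace (or add) one /etc/hosts entry without disturbing the rest.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.65.254\thost.minikube.internal\n'
	} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts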
	I1205 08:04:01.408320    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:01.468320    6576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 08:04:00.335905    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:00.512126    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:04:03.018493    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:04:01.471323    6576 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:04:01.471323    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:04:01.475324    6576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:04:01.511342    6576 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:04:01.512362    6576 cache_images.go:86] Images are preloaded, skipping loading
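cache_images.go:86 declares the images preloaded by comparing the docker images listing above against the expected set for the requested Kubernetes version. A manual spot-check of the same condition, using the container name and version from this run:

	# Confirm the kube control-plane images for v1.35.0-beta.0 are present.
	docker exec -t newest-cni-042100 docker images --format '{{.Repository}}:{{.Tag}}' \
	  | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy):v1.35.0-beta.0'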
	I1205 08:04:01.512362    6576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 08:04:01.512362    6576 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 08:04:01.515327    6576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:04:01.600646    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:04:01.600646    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:04:01.600646    6576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 08:04:01.600646    6576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:04:01.600646    6576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
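The dump above is one file holding four stacked YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A quick syntax check of such a file, assuming a kubeadm recent enough to ship the "config validate" subcommand (present in current releases, an assumption for this exact beta binary):

	# Validate the stacked kubeadm config that was just generated.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new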
	
	I1205 08:04:01.604645    6576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 08:04:01.617663    6576 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:04:01.621646    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:04:01.634708    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 08:04:01.659457    6576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 08:04:01.681516    6576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1205 08:04:01.709549    6576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:04:01.717165    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.737936    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:01.886462    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:01.908845    6576 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 08:04:01.908845    6576 certs.go:195] generating shared ca certs ...
	I1205 08:04:01.908845    6576 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:01.910250    6576 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:04:01.910428    6576 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:04:01.910428    6576 certs.go:257] generating profile certs ...
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 08:04:01.911645    6576 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 08:04:01.912393    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:04:01.912708    6576 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:04:01.912818    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:04:01.913766    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:04:01.914884    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:04:01.946745    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:04:01.978670    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:04:02.020771    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:04:02.052789    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 08:04:02.083785    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:04:02.111686    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:04:02.138106    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:04:02.167957    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:04:02.197699    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:04:02.228974    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:04:02.258542    6576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:04:02.283541    6576 ssh_runner.go:195] Run: openssl version
	I1205 08:04:02.296537    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.312534    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:04:02.327543    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.334539    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.339544    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.392223    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:04:02.408977    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.424981    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:04:02.439981    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.446982    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.451985    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.500175    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:04:02.518368    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.537597    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:04:02.555653    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.562656    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.566659    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.617005    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
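Each openssl x509 -hash -noout call above prints the certificate's subject hash, and the following test -L confirms the /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses for trust lookups (b5213941.0 is the hash for minikubeCA.pem in this run). The install pattern, condensed:

	# Install a CA into the OpenSSL trust dir via its subject-hash symlink.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	test -L "/etc/ssl/certs/${hash}.0" && echo "trusted as ${hash}.0"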
	I1205 08:04:02.635329    6576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:04:02.649383    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 08:04:02.697863    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 08:04:02.747535    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 08:04:02.802236    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 08:04:02.853222    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 08:04:02.901642    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
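The -checkend 86400 flag in the runs above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires within that window and would need regeneration. For example:

	# Exit 0 if the cert is valid 24h from now, non-zero if it will have expired.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"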
	I1205 08:04:02.946962    6576 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:04:02.951256    6576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:04:02.986478    6576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:04:02.999955    6576 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 08:04:02.999955    6576 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 08:04:03.003999    6576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 08:04:03.019291    6576 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 08:04:03.022819    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.083372    6576 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.084185    6576 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042100" cluster setting kubeconfig missing "newest-cni-042100" context setting]
	I1205 08:04:03.084741    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.109144    6576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 08:04:03.128232    6576 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 08:04:03.138905    6576 kubeadm.go:602] duration metric: took 138.9481ms to restartPrimaryControlPlane
	I1205 08:04:03.138905    6576 kubeadm.go:403] duration metric: took 191.9404ms to StartCluster
	I1205 08:04:03.138905    6576 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.138905    6576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.141698    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.142419    6576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:04:03.142419    6576 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:04:03.142419    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:04:03.163290    6576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting dashboard=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon dashboard=true in "newest-cni-042100"
	W1205 08:04:03.163290    6576 addons.go:248] addon dashboard should already be in state true
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.192363    6576 out.go:179] * Verifying Kubernetes components...
	I1205 08:04:03.249622    6576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-042100"
	I1205 08:04:03.250609    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.257607    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.258609    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:03.261608    6576 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 08:04:03.264610    6576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:04:03.309607    6576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.309607    6576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:04:03.312609    6576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.312609    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:04:03.312609    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.315610    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.318607    6576 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 08:04:03.510751    7752 pod_ready.go:94] pod "coredns-66bc5c9577-gsfxl" is "Ready"
	I1205 08:04:03.510751    7752 pod_ready.go:86] duration metric: took 25.5102081s for pod "coredns-66bc5c9577-gsfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.517746    7752 pod_ready.go:83] waiting for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.529764    7752 pod_ready.go:94] pod "etcd-kubenet-218000" is "Ready"
	I1205 08:04:03.529764    7752 pod_ready.go:86] duration metric: took 12.0185ms for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.535749    7752 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.544756    7752 pod_ready.go:94] pod "kube-apiserver-kubenet-218000" is "Ready"
	I1205 08:04:03.544756    7752 pod_ready.go:86] duration metric: took 9.007ms for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.549745    7752 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.706418    7752 pod_ready.go:94] pod "kube-controller-manager-kubenet-218000" is "Ready"
	I1205 08:04:03.706418    7752 pod_ready.go:86] duration metric: took 156.6708ms for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.906896    7752 pod_ready.go:83] waiting for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.305526    7752 pod_ready.go:94] pod "kube-proxy-l9mnz" is "Ready"
	I1205 08:04:04.305526    7752 pod_ready.go:86] duration metric: took 398.0934ms for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.506453    7752 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:94] pod "kube-scheduler-kubenet-218000" is "Ready"
	I1205 08:04:04.908413    7752 pod_ready.go:86] duration metric: took 401.8894ms for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:40] duration metric: took 37.4190345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:04:05.004707    7752 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:04:05.007705    7752 out.go:179] * Done! kubectl is now configured to use "kubenet-218000" cluster and "default" namespace by default
	I1205 08:04:03.344609    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 08:04:03.344609    6576 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 08:04:03.353008    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.373762    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.389748    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.415749    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.454747    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:03.481745    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.544756    6576 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:04:03.550761    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:03.552751    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.556766    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 08:04:03.556766    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 08:04:03.561743    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.627813    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 08:04:03.627923    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 08:04:03.654463    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 08:04:03.654463    6576 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 08:04:03.731575    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 08:04:03.731654    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1205 08:04:03.751356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.751356    6576 retry.go:31] will retry after 148.467646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.754346    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	W1205 08:04:03.755354    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.755354    6576 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 08:04:03.755354    6576 retry.go:31] will retry after 202.130528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.774491    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 08:04:03.774491    6576 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 08:04:03.793803    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 08:04:03.793803    6576 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 08:04:03.828295    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 08:04:03.828351    6576 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 08:04:03.851355    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.851355    6576 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 08:04:03.876402    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.905217    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:03.957742    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.957742    6576 retry.go:31] will retry after 291.655688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.962256    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:03.992521    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.992521    6576 retry.go:31] will retry after 561.792628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.049441    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:04.057481    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.057556    6576 retry.go:31] will retry after 288.112081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.254701    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.343216    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.343216    6576 retry.go:31] will retry after 359.979776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.350062    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:04.431174    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.431174    6576 retry.go:31] will retry after 483.679942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.549772    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:04.559147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:04.642871    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.642871    6576 retry.go:31] will retry after 528.970083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.708123    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.787283    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.787283    6576 retry.go:31] will retry after 459.684582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.919229    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.004707    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.004707    6576 retry.go:31] will retry after 831.823948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.050298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.177969    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:05.252148    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:05.268807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.268914    6576 retry.go:31] will retry after 1.219301827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:05.381615    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.381684    6576 retry.go:31] will retry after 1.003502336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.548840    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.841493    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.945714    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.945714    6576 retry.go:31] will retry after 1.344373684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.051495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:06.390219    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:06.476859    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.476859    6576 retry.go:31] will retry after 916.677354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.493513    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:06.550586    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:06.586142    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.586142    6576 retry.go:31] will retry after 814.667109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.049968    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:07.295279    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:07.385161    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.385225    6576 retry.go:31] will retry after 2.309719888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.397737    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:07.404241    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.24760459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.229405263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.550637    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.050329    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.551330    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.052416    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.549628    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:10.375252    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:04:09.699045    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:09.722067    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:09.740066    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:09.854063    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.854063    6576 retry.go:31] will retry after 1.718952919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
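
Annotation: the repeated "failed to download openapi … connection refused" stderr above means kubectl's client-side validation cannot fetch the OpenAPI schema because nothing is listening on localhost:8443 yet. The suggested `--validate=false` would only silence the symptom; the root cause is that kube-apiserver is not up. A minimal sketch of a readiness probe one could run before applying manifests (port taken from the log, `/readyz` is the apiserver's standard readiness endpoint; this is illustrative, not minikube's actual code):

```go
// Sketch only: poll the apiserver's /readyz endpoint until it answers,
// so that `kubectl apply` can download the OpenAPI schema for validation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverReady(url string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a cluster-local cert during bring-up; a real
		// caller would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false // connection refused: apiserver not listening yet
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for !apiserverReady("https://localhost:8443/readyz") {
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver is ready; safe to apply addon manifests")
}
```
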
	W1205 08:04:09.926061    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.926061    6576 retry.go:31] will retry after 2.401961347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 08:04:09.960056    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.961057    6576 retry.go:31] will retry after 3.751594778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:10.049061    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:10.549298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.049797    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.550139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.577133    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:11.663155    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:11.663155    6576 retry.go:31] will retry after 4.120114825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
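
Annotation: the `retry.go:31] will retry after …` lines show each failed apply being re-scheduled with growing, jittered delays (1.7s, 2.4s, 3.75s, … up to 18.7s later in this log). A minimal sketch of that retry-with-jittered-backoff pattern; the doubling factor and 50% jitter here are illustrative assumptions, not minikube's actual constants:

```go
// Sketch of retry with exponential backoff plus random jitter,
// matching the shape of the retry.go lines in this log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Add up to 50% random jitter so concurrent retries don't synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // exponential growth between attempts
	}
	return err
}

func main() {
	err := retryWithBackoff(5, time.Second, func() error {
		return errors.New("apply failed") // stand-in for the kubectl apply call
	})
	fmt.Println("gave up:", err)
}
```
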
	I1205 08:04:12.049572    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:12.333014    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:12.419653    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.419653    6576 retry.go:31] will retry after 2.740389125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:12.549673    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.050128    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.549901    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.717839    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:13.806807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:13.806807    6576 retry.go:31] will retry after 4.752661147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:14.050521    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:14.551720    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.050682    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.165926    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:15.256271    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.256271    6576 retry.go:31] will retry after 4.534312748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:15.549805    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.787818    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:15.865098    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.865628    6576 retry.go:31] will retry after 5.383695211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 08:04:16.050434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:16.549442    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.049923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.550083    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.049667    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
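
Annotation: interleaved with the apply retries, the `ssh_runner` lines poll `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, waiting for the apiserver process to appear inside the node. A self-contained sketch of such a poll loop (pattern and interval taken from the log; the helper name is hypothetical, and running it needs pgrep on PATH):

```go
// Sketch of a fixed-interval process poll: shell out to
// `pgrep -xnf <pattern>` until at least one process matches.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 when at least one process matches the pattern.
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println("apiserver never appeared:", err)
		return
	}
	fmt.Println("kube-apiserver is running")
}
```
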
	W1205 08:04:19.104488    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 08:04:19.104793    4560 node_ready.go:38] duration metric: took 6m0.001013s for node "no-preload-104100" to be "Ready" ...
	I1205 08:04:19.107356    4560 out.go:203] 
	W1205 08:04:19.110511    4560 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 08:04:19.110554    4560 out.go:285] * 
	W1205 08:04:19.112383    4560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:04:19.116573    4560 out.go:203] 
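
Annotation: process 4560's failure above is a separate test run: its `node_ready.go` wait gave node "no-preload-104100" 6m0s to report the Ready condition and hit the context deadline, so the run exits with GUEST_START. A sketch of that kind of wait using client-go (kubeconfig path, node name, and timeout copied from this log; this is an illustration of the condition check, not minikube's implementation):

```go
// Sketch: poll a node's Ready condition via client-go until it is True
// or the context deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		if ok, err := nodeReady(ctx, cs, "no-preload-104100"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node Ready:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}
```
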
	I1205 08:04:18.551343    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.565349    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:18.647263    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:18.647263    6576 retry.go:31] will retry after 8.382323881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:19.050424    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.796280    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:19.904265    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.904265    6576 retry.go:31] will retry after 5.117792571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:20.052293    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:20.550380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.052677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.255736    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:21.356356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.356356    6576 retry.go:31] will retry after 8.875197166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 08:04:21.550333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.049310    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.550338    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.050244    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.551039    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.050874    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.550399    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:25.027043    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:25.050989    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:25.159593    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.159593    6576 retry.go:31] will retry after 7.802785807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:25.553440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.050359    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.551986    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:27.034606    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:27.050924    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:27.141503    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.141551    6576 retry.go:31] will retry after 13.674183061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:27.553694    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.049210    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.550842    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.051091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.549571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.051474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.237147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:30.345143    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.345143    6576 retry.go:31] will retry after 18.684554823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
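Note how the storageclass, storage-provisioner, and dashboard applies interleave, each with its own retry timer: the interleaved timestamps suggest the addons are enabled concurrently, so one stuck addon does not block the others. A rough sketch of that fan-out under that assumption; applyAddon is hypothetical and stands in for the kubectl invocation plus per-addon retry:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/storageclass.yaml",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/dashboard-ns.yaml",
        }
        var wg sync.WaitGroup
        for _, m := range manifests {
            wg.Add(1)
            go func(m string) {
                defer wg.Done()
                // applyAddon(m) would shell out to kubectl apply --force -f m
                // and retry with its own backoff on failure (hypothetical).
                fmt.Println("applying", m)
            }(m)
        }
        wg.Wait()
    }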
	I1205 08:04:30.552505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.050974    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.550315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.053025    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.550841    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.967139    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:33.050008    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:33.074001    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.074001    6576 retry.go:31] will retry after 21.457353412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.550375    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.053598    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.050034    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.050947    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.552933    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.049827    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.551205    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.050234    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.552156    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.050748    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.549737    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.050549    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.550949    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.819283    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:40.946292    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:40.946292    6576 retry.go:31] will retry after 18.180546633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:41.051295    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:41.551923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.051010    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.550802    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.050090    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.549595    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.050323    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.551060    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.050284    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.549318    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.049045    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.550390    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.050869    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.549920    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.050040    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:49.037573    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:49.050392    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:49.132808    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.132808    6576 retry.go:31] will retry after 12.282235903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.549952    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.052465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.550412    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.053026    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.551123    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.050959    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.550243    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.051085    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.550766    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.053585    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.537931    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:54.551106    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:54.662326    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:54.662326    6576 retry.go:31] will retry after 25.982171867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:55.050927    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:55.551197    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.049847    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.551717    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.050571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.552306    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.050495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.550960    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.050091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.133373    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:59.223117    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.223117    6576 retry.go:31] will retry after 23.551015037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.551231    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.047738    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.550465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.051875    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.420389    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:01.505728    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.505728    6576 retry.go:31] will retry after 17.206812229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.551821    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.051028    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.550994    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.051369    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.550326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:03.585938    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.585938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:03.590134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:03.617879    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.617879    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:03.624332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:03.651940    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.651940    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:03.656120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:03.685733    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.685733    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:03.690030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:03.719658    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.719713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:03.723576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:03.755797    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.755797    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:03.760966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:03.789461    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.789461    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:03.793178    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:03.823147    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.823147    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
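With the retries still failing, the log collector scans for the control-plane containers by name. cri-dockerd (like dockershim before it) names pod containers with a k8s_ prefix, roughly k8s_<container>_<pod>_<namespace>_..., so filtering on name=k8s_kube-apiserver is enough to find the apiserver container; and because -a includes exited containers, the empty results above mean these containers were never created, not merely stopped. The same lookup sketched in Go (containerIDs is illustrative; docker is assumed on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns IDs of all containers, running or exited,
    // whose name carries the k8s_ prefix for the given component.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        fmt.Println(len(ids), "containers:", ids, err)
    }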
	I1205 08:05:03.823147    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:03.823679    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:03.890829    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:03.890829    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:03.937573    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:03.937573    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:04.028268    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:04.028268    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:04.028268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:04.054265    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:04.054265    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
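The container-status command above is a double fallback: "which crictl || echo crictl" resolves crictl's full path when installed and otherwise degrades to the bare name, and if that crictl invocation still fails, the trailing "|| sudo docker ps -a" falls back to docker. The same try-then-fall-back shape in Go (containerStatus is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the shell one-liner in the log above.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
            return out, nil
        }
        return exec.Command("docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(string(out), err)
    }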
	I1205 08:05:06.624597    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:06.650113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:06.681568    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.682088    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:06.685527    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:06.715181    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.715181    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:06.718768    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:06.748649    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.748692    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:06.752313    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:06.783519    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.783582    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:06.787257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:06.817858    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.817858    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:06.821703    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:06.854241    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.854241    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:06.857773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:06.888901    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.888901    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:06.894071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:06.923675    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.923675    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:06.923675    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:06.923675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:06.974113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:06.974166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:07.037689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:07.037689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:07.080588    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:07.080588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:07.171034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:07.171067    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:07.171067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:09.706054    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:09.732108    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:09.767273    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.767300    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:09.770837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:09.802479    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.802550    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:09.806320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:09.835537    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.835537    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:09.841566    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:09.874578    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.874578    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:09.878148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:09.906942    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.907017    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:09.910154    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:09.941197    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.941197    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:09.945133    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:09.974591    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.974591    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:09.978698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:10.007749    6576 logs.go:282] 0 containers: []
	W1205 08:05:10.007749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:10.007749    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:10.007749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:10.044236    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:10.044236    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:10.130995    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:10.130995    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:10.130995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:10.158359    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:10.158945    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:10.209053    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:10.209053    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:12.782787    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:12.809043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:12.839958    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.839958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:12.845180    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:12.876657    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.876720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:12.880739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:12.908227    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.908227    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:12.912011    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:12.942400    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.942449    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:12.945431    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:12.973155    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.973155    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:12.976739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:13.004259    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.004259    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:13.008151    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:13.038225    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.038225    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:13.041692    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:13.070500    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.070500    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:13.070500    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:13.070500    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:13.134608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:13.134608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:13.173994    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:13.173994    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:13.270602    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:13.270665    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:13.270665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:13.299297    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:13.299297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:15.870600    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:15.895506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:15.927013    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.927013    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:15.930717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:15.959875    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.959941    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:15.963955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:15.992862    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.992862    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:15.996303    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:16.023966    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.023966    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:16.027786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:16.058698    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.058698    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:16.065246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:16.094826    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.094826    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:16.098650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:16.144774    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.144820    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:16.148422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:16.177296    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.177296    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:16.177296    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:16.177296    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:16.242225    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:16.242225    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:16.283778    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:16.283778    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:16.378623    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:16.378623    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:16.378623    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:16.408736    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:16.409256    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:18.719251    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:18.815541    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:18.815541    6576 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:18.959261    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:18.983847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:19.016048    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.016048    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:19.022913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:19.054693    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.054752    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:19.058555    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:19.087342    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.087342    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:19.090772    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:19.118199    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.118199    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:19.121567    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:19.151346    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.151346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:19.155305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:19.186521    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.186611    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:19.190219    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:19.220730    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.220730    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:19.225064    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:19.255890    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.256013    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:19.256013    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:19.256013    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:19.324476    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:19.324476    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:19.362802    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:19.362802    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:19.443537    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:19.444546    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:19.444546    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:19.474585    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:19.474647    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:20.651307    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:20.735190    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:20.735294    6576 retry.go:31] will retry after 27.405422909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.034778    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:22.060808    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:22.093037    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.093111    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:22.097193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:22.124988    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.125036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:22.128496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:22.157896    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.157947    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:22.161826    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:22.190808    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.190839    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:22.194900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:22.227226    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.227346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:22.230966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:22.260811    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.260861    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:22.264784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:22.295222    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.295331    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:22.302135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:22.343045    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.343116    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:22.343116    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:22.343116    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:22.394026    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:22.394026    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:22.457078    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:22.457078    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:22.498385    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:22.498434    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:22.581112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:22.581112    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:22.581112    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:22.780060    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:05:22.859804    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.859804    6576 retry.go:31] will retry after 21.036491608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:25.113006    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:25.148820    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:25.186604    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.186604    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:25.191401    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:25.223786    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.223867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:25.227359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:25.262253    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.262310    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:25.266030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:25.298397    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.298433    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:25.303771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:25.334112    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.334112    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:25.338565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:25.370125    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.370206    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:25.374513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:25.406130    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.406219    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:25.410417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:25.442663    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.442742    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:25.442742    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:25.442742    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:25.479786    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:25.479786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:25.573308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:25.573308    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:25.573308    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:25.599667    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:25.599667    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:25.650617    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:25.650617    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.218354    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:28.243705    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:28.279022    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.279022    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:28.283525    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:28.313798    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.313798    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:28.318172    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:28.347700    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.347700    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:28.351701    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:28.381257    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.381341    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:28.384917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:28.416041    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.416041    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:28.419541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:28.447349    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.447349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:28.451684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:28.479275    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.479307    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:28.483095    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:28.511115    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.511187    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:28.511187    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:28.511237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.574706    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:28.574706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:28.615541    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:28.615541    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:28.709604    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:28.709604    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:28.709604    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:28.738815    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:28.738815    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.300476    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:31.328202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:31.357921    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.357958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:31.361905    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:31.390844    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.390926    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:31.395488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:31.426488    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.426570    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:31.430048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:31.461632    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.461687    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:31.465105    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:31.492594    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.492657    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:31.496042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:31.523806    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.523834    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:31.527758    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:31.557959    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.558020    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:31.561776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:31.588451    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.588485    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:31.588513    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:31.588535    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:31.675984    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:31.675984    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:31.675984    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:31.706483    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:31.706567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.753154    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:31.753677    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:31.813379    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:31.813379    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.359731    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:34.386737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:34.416273    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.416306    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:34.419220    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:34.452145    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.452661    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:34.456139    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:34.486541    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.486593    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:34.489738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:34.520642    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.520642    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:34.524007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:34.556848    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.556848    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:34.560551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:34.589976    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.589976    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:34.594061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:34.623871    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.623871    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:34.627661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:34.655428    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.655428    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:34.655428    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:34.655428    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.693248    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:34.693248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:34.782095    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:34.782095    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:34.782095    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:34.809243    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:34.809243    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:34.859486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:34.859486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.427533    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:37.454695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:37.485702    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.485702    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:37.489329    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:37.522074    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.522074    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:37.525283    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:37.555534    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.555534    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:37.559473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:37.589923    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.589923    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:37.593340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:37.625230    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.625230    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:37.628764    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:37.658722    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.658722    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:37.661870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:37.693003    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.693003    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:37.696992    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:37.726216    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.726286    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:37.726286    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:37.726333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.791305    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:37.791305    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:37.829600    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:37.829600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:37.920892    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:37.920892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:37.920892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:37.947989    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:37.947989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:40.501988    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:40.527784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:40.563590    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.563590    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:40.567375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:40.598332    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.598332    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:40.602019    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:40.629289    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.629289    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:40.633378    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:40.660574    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.660630    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:40.664275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:40.691063    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.691063    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:40.694694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:40.723611    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.723667    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:40.726975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:40.755155    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.755155    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:40.759134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:40.793723    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.793723    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:40.793723    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:40.793723    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:40.831198    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:40.831198    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:40.925587    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:40.925587    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:40.925587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:40.954081    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:40.954114    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:41.007048    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:41.007096    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
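The repeating block above is minikube's log collector: it pgreps for the apiserver, queries Docker for each expected control-plane container by name filter, and, finding none, gathers kubelet, dmesg, Docker, and container-status output before retrying. The container check can be reproduced directly; `<profile>` is again a placeholder:

    minikube ssh -p <profile> "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"

An empty result corresponds to the `0 containers: []` lines in the log.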
	I1205 08:05:43.582160    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:43.607539    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:43.638277    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.638277    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:43.642375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:43.675099    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.675099    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:43.678089    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:43.706803    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.706803    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:43.713114    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:43.740522    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.740522    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:43.744411    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:43.773724    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.773780    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:43.777763    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:43.803962    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.803962    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:43.807698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:43.839559    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.839559    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:43.843918    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:43.876174    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.876252    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:43.876252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:43.876252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:43.902671    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:05:43.934973    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:43.934973    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 08:05:43.999146    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:43.999146    6576 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
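The storageclass apply fails before the manifest is even submitted: `kubectl apply` tries to download the OpenAPI schema from the (unreachable) apiserver in order to validate the file, which is why the error suggests `--validate=false`. Skipping validation would not help here, since the subsequent request would hit the same refused connection; as a sketch of what the hint means, with the paths taken from the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      kubectl apply --validate=false --force -f /etc/kubernetes/addons/storageclass.yaml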
	I1205 08:05:44.032735    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:44.033740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:44.075384    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:44.075384    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:44.157223    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:44.157223    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:44.157223    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:46.691333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:46.717072    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:46.748595    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.748595    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:46.752218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:46.780374    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.780374    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:46.783922    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:46.815066    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.815066    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:46.818942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:46.847510    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.847563    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:46.851012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:46.883362    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.883465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:46.886941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:46.916379    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.916451    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:46.920641    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:46.949114    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.949114    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:46.953549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:46.983164    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.983164    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:46.983164    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:46.983164    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:47.022255    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:47.022255    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:47.111784    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:47.111860    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:47.111860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:47.138559    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:47.138559    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:47.188823    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:47.189346    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:48.147422    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:48.239875    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:48.239875    6576 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:48.242898    6576 out.go:179] * Enabled addons: 
	I1205 08:05:48.245836    6576 addons.go:530] duration metric: took 1m45.1017438s for enable addons: enabled=[]
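`enabled=[]` confirms the outcome: after 1m45s of retries, neither default-storageclass nor dashboard could be applied, so the run finishes with no addons enabled. On a healthy cluster the quickest sanity checks would be the standard ones (profile placeholder again):

    minikube status -p <profile>
    kubectl get --raw='/readyz?verbose'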
	I1205 08:05:49.757493    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:49.785573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:49.818757    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.818757    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:49.822359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:49.849919    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.849919    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:49.853892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:49.881451    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.881451    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:49.884508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:49.916549    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.916599    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:49.922025    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:49.955857    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.955857    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:49.959871    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:49.992747    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.992747    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:49.997745    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:50.027985    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.027985    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:50.032696    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:50.066315    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.066315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:50.066315    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:50.066315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:50.162764    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:50.162764    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:50.162764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:50.190807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:50.190807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:50.244357    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:50.244357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:50.306832    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:50.306832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:52.850828    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:52.881404    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:52.914164    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.914164    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:52.919056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:52.946339    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.946339    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:52.950249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:52.977159    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.977159    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:52.981587    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:53.011126    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.011126    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:53.016170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:53.050900    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.050900    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:53.055929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:53.086492    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.086492    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:53.091422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:53.123587    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.123587    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:53.126586    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:53.155525    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.155525    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:53.155525    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:53.155525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:53.220198    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:53.221197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:53.261683    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:53.261683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:53.355432    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:53.355432    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:53.355432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:53.386521    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:53.386521    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:55.947613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:55.973795    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:56.007916    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.007916    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:56.011792    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:56.045094    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.045094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:56.048513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:56.082501    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.082501    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:56.086603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:56.116918    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.117005    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:56.120916    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:56.150716    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.150716    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:56.154101    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:56.186882    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.186882    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:56.190500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:56.223741    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.223741    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:56.227290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:56.255902    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.255902    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:56.255902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:56.255902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:56.285180    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:56.285180    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:56.333650    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:56.333650    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:56.393332    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:56.393332    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:56.432841    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:56.432841    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:56.521419    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
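From 08:05:37 onward the same probe cycle repeats roughly every three seconds with identical results; only the timestamps and kubectl PIDs change. When scanning a report like this one, counting the repeats makes the failure window obvious (the log filename is hypothetical):

    grep -cE 'No container was found matching "kube-apiserver"' minikube.log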
	I1205 08:05:59.025923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:59.056473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:59.091893    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.091909    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:59.095650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:59.128079    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.128185    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:59.131611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:59.159655    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.159655    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:59.163348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:59.192422    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.192422    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:59.196339    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:59.226737    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.226737    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:59.230776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:59.258194    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.258194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:59.261784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:59.292592    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.292592    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:59.296370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:59.323764    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.323764    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:59.323764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:59.323764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:59.375689    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:59.376207    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:59.440586    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:59.440586    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:59.479856    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:59.479856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:59.578161    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:59.578161    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:59.578161    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.111153    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:02.137611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:02.172231    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.172231    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:02.176271    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:02.208274    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.208274    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:02.211990    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:02.244184    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.244245    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:02.247661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:02.278388    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.278388    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:02.282228    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:02.312290    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.312290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:02.316470    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:02.345487    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.345487    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:02.349444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:02.378305    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.378305    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:02.381923    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:02.409737    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.409737    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:02.409737    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:02.409737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:02.477029    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:02.477029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:02.517422    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:02.517422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:02.605249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:02.605249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:02.605249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.632767    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:02.632828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:05.196182    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:05.221488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:05.251281    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.251355    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:05.254854    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:05.284103    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.284103    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:05.288076    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:05.315552    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.315552    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:05.319409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:05.347664    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.347664    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:05.351387    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:05.382685    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.382685    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:05.386801    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:05.416816    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.416816    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:05.421471    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:05.451265    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.451350    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:05.455129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:05.486455    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.486455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:05.486455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:05.486455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:05.548252    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:05.548252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:05.586103    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:05.586103    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:05.689902    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:05.689902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:05.689902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:05.715463    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:05.715463    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
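The eight "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" probes repeated in each cycle above check for one container per expected control-plane component, and every probe here returns zero IDs. A hedged Go sketch of that lookup loop (the component list mirrors the log; the helper itself is an assumption, not minikube source):

    // lookup_sketch.go: illustrative sketch of the per-component container
    // lookup recorded above. For each component, list docker containers whose
    // name matches the k8s_<component> prefix and print the IDs found.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: lookup failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }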
	I1205 08:06:08.298546    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:08.325694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:08.358357    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.358427    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:08.362535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:08.393631    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.393631    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:08.397365    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:08.429162    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.429162    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:08.433444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:08.464672    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.464672    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:08.467810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:08.496450    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.496450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:08.499640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:08.526246    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.526246    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:08.530507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:08.558130    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.558130    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:08.561856    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:08.590753    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.590753    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:08.590753    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:08.590753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:08.656049    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:08.656049    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:08.697268    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:08.697268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:08.794510    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:08.794510    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:08.794510    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:08.839662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:08.839734    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
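The timestamps on the "sudo pgrep -xnf kube-apiserver.*minikube.*" lines show the check recurring roughly every three seconds: a poll loop waiting for an apiserver process that never appears. A self-contained sketch of that wait pattern (the interval and timeout are illustrative assumptions, not minikube's actual values):

    // wait_sketch.go: illustrative poll loop. pgrep exits 0 when at least one
    // process matches the pattern, so a nil error from Run() means "found".
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForProcess(pattern string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
                return true
            }
            time.Sleep(3 * time.Second)
        }
        return false
    }

    func main() {
        found := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
        fmt.Println("apiserver process found:", found)
    }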
	I1205 08:06:11.394677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:11.423727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:11.453346    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.453346    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:11.460955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:11.498834    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.498834    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:11.498834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:11.532657    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.532657    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:11.540987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:11.575759    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.575786    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:11.579561    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:11.612047    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.612102    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:11.615579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:11.644318    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.644370    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:11.648326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:11.678026    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.678026    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:11.681899    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:11.711631    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.711631    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:11.711631    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:11.711631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:11.772905    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:11.772905    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:11.814639    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:11.814639    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:11.905607    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:11.905657    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:11.905700    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:11.934717    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:11.935238    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
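The "container status" command is a two-stage fallback: the backquoted "which crictl || echo crictl" substitutes either the full path to crictl or the bare name, and if the resulting "sudo crictl ps -a" then fails (for example because crictl is not installed or cannot reach the runtime), the trailing "|| sudo docker ps -a" lists the containers through Docker instead.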
	I1205 08:06:14.488836    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:14.512857    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:14.546571    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.546571    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:14.549903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:14.580887    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.580887    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:14.584967    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:14.630312    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.630312    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:14.633809    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:14.667373    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.667373    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:14.671026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:14.699813    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.699813    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:14.703177    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:14.734619    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.734619    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:14.739056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:14.769129    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.769129    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:14.773030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:14.803689    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.803689    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:14.803689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:14.803689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:14.841923    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:14.841923    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:14.932570    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:14.932570    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:14.932570    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:14.961067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:14.961591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:15.010912    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:15.010953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:17.575458    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:17.603741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:17.636367    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.636367    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:17.640529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:17.668380    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.668380    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:17.672111    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:17.700544    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.700544    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:17.704634    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:17.736823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.736823    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:17.741002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:17.770125    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.770125    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:17.775816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:17.812823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.812823    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:17.815683    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:17.844895    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.844895    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:17.849115    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:17.880706    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.880706    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:17.880706    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:17.880706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:17.969171    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:17.969171    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:17.969263    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:17.995396    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:17.995396    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:18.044466    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:18.044466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:18.105721    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:18.105721    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:20.651671    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:20.679273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:20.707727    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.707727    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:20.711373    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:20.741891    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.741891    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:20.746073    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:20.777260    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.777260    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:20.780520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:20.816982    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.816982    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:20.820520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:20.850461    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.850461    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:20.854205    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:20.882429    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.882429    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:20.886920    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:20.914179    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.914179    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:20.917831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:20.949708    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.949708    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:20.949708    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:20.949708    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:21.013967    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:21.013967    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:21.053946    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:21.053946    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:21.140482    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:21.141002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:21.141002    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:21.170239    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:21.170239    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:23.729627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:23.758686    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:23.791537    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.791594    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:23.796131    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:23.827894    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.827894    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:23.832419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:23.862718    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.862718    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:23.867837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:23.896272    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.896272    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:23.900193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:23.929016    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.929078    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:23.932778    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:23.962372    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.962447    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:23.966147    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:23.998472    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.998472    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:24.004351    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:24.033564    6576 logs.go:282] 0 containers: []
	W1205 08:06:24.033564    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:24.033564    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:24.033564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:24.099505    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:24.099505    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:24.139900    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:24.139900    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:24.233474    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:24.233474    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:24.233474    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:24.263408    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:24.263408    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:26.816321    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:26.841457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:26.872936    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.872992    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:26.876345    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:26.908512    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.908580    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:26.912736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:26.944068    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.944068    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:26.947603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:26.975323    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.975360    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:26.978941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:27.008708    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.008751    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:27.012371    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:27.044160    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.044225    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:27.047780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:27.078172    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.078172    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:27.081803    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:27.111287    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.111370    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:27.111370    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:27.111435    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:27.161265    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:27.161329    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:27.221473    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:27.221473    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:27.263907    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:27.263907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:27.357876    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:27.357876    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:27.357876    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:29.890252    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:29.916690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:29.946274    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.946274    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:29.950679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:29.979149    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.979149    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:29.982229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:30.010085    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.010085    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:30.014016    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:30.043254    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.043254    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:30.048048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:30.080613    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.080613    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:30.084300    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:30.114627    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.114627    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:30.118584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:30.147947    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.148009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:30.151166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:30.180743    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.180828    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:30.180828    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:30.180828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:30.244646    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:30.244646    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:30.286079    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:30.286079    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:30.376557    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:30.376557    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:30.376557    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:30.405737    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:30.405737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:32.958550    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:32.987728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:33.018308    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.018370    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:33.022062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:33.052435    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.052435    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:33.056434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:33.085355    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.085426    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:33.089343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:33.121676    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.121737    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:33.125504    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:33.157765    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.157765    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:33.161892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:33.191061    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.191061    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:33.194930    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:33.223173    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.223173    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:33.226650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:33.257481    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.257481    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:33.257481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:33.257481    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:33.301467    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:33.301467    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:33.389528    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
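	[Note] The describe-nodes probe uses the pinned v1.35.0-beta.0 kubectl against the node-local kubeconfig at /var/lib/minikube/kubeconfig; with no apiserver it exits with status 1, and minikube records both the formatted error and the raw stderr, which is why each failure appears twice per block. The non-apiserver log sources are plain shell pipelines run over SSH, collected in this sketch (unit names, flags, and paths are copied from the log; only the grouping is illustrative):

	    # Sketch: the fallback log sources minikube collects while the apiserver is down.
	    sudo journalctl -u docker -u cri-docker -n 400                              # Docker
	    sudo journalctl -u kubelet -n 400                                           # kubelet
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400     # dmesg
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a              # container status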
	I1205 08:06:33.389528    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:33.389528    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:33.418631    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:33.418631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:33.465106    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:33.465185    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.034296    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:36.063459    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:36.095210    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.095210    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:36.098565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:36.127708    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.127786    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:36.131615    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:36.159964    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.159964    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:36.163771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:36.192604    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.192604    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:36.196679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:36.224877    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.224958    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:36.228553    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:36.258280    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.258280    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:36.261911    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:36.294140    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.294140    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:36.298273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:36.329657    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.329657    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:36.329657    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:36.329657    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:36.387784    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:36.387784    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.452385    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:36.452385    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:36.493394    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:36.493394    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:36.591485    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:36.591485    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:36.591567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.124474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:39.152578    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:39.183392    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.183392    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:39.187028    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:39.216193    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.216193    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:39.219743    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:39.251680    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.251759    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:39.255869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:39.283843    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.283843    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:39.287237    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:39.316021    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.316021    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:39.319015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:39.349194    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.349194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:39.352951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:39.403729    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.403729    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:39.411012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:39.442909    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.442909    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:39.442909    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:39.442909    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:39.509174    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:39.509174    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:39.550483    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:39.550483    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:39.650354    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:39.650354    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:39.650354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.676786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:39.676786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.228069    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:42.258786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:42.290791    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.290791    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:42.294739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:42.326094    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.326094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:42.329725    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:42.356052    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.356052    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:42.359752    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:42.390464    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.390464    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:42.393935    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:42.421882    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.421882    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:42.426609    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:42.457036    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.457036    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:42.460988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:42.486064    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.486064    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:42.491250    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:42.521748    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.521748    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:42.521748    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:42.521748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:42.551195    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:42.552197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.613626    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:42.613683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:42.678856    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:42.679856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:42.719297    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:42.719297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:42.811034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:45.316640    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:45.343574    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:45.372899    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.372899    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:45.376229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:45.408264    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.408264    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:45.412119    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:45.440697    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.440697    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:45.444501    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:45.471692    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.471727    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:45.475496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:45.508400    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.508450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:45.512541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:45.544177    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.544233    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:45.548858    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:45.579165    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.579165    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:45.582164    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:45.623052    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.623052    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:45.623052    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:45.623052    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:45.651554    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:45.651554    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:45.701716    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:45.701768    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:45.766248    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:45.766248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:45.806341    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:45.806341    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:45.895675    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:48.401571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:48.432481    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:48.466418    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.466418    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:48.471424    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:48.503617    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.503617    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:48.507677    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:48.541480    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.541480    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:48.547529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:48.579177    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.579177    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:48.585087    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:48.626465    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.626465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:48.630533    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:48.660304    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.660304    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:48.663999    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:48.694957    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.694957    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:48.699665    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:48.725908    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.725908    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:48.725908    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:48.725908    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:48.817395    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:48.817466    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:48.817466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:48.848226    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:48.848739    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:48.900060    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:48.900060    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:48.962797    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:48.962797    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
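	[Note] The timestamps show the whole cycle repeating on a roughly three-second cadence: "sudo pgrep -xnf kube-apiserver.*minikube.*" at 08:06:42, 08:06:45, 08:06:48, and again at 08:06:51 just below, returning nothing each time. That is a wait-for-apiserver poll; a reduced sketch of the same pattern with an explicit deadline (the 120s timeout and the messages are illustrative, only the pgrep line comes from the log):

	    # Sketch: the poll visible in the timestamps above, with an explicit deadline.
	    deadline=$((SECONDS + 120))   # illustrative timeout, not from the log
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3
	    done
	    echo "kube-apiserver is running"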
	I1205 08:06:51.508647    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:51.536278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:51.573226    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.573323    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:51.578061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:51.614603    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.614603    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:51.619576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:51.647095    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.647095    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:51.652535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:51.680320    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.680369    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:51.684269    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:51.717798    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.717827    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:51.721877    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:51.750482    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.750482    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:51.754602    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:51.786216    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.786216    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:51.790834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:51.819030    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.819030    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:51.819030    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:51.819030    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:51.876069    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:51.876110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:51.938469    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:51.938469    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.980953    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:51.980953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:52.079938    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:52.079938    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:52.079938    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:54.616891    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:54.642146    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:54.675691    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.675691    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:54.679440    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:54.709522    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.709522    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:54.713343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:54.744053    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.744112    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:54.748148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:54.782163    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.782232    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:54.786128    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:54.817067    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.817067    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:54.820867    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:54.850003    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.850003    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:54.854439    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:54.882517    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.882566    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:54.886475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:54.917057    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.917057    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:54.917057    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:54.917057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:54.982333    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:54.982333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:55.023534    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:55.023534    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:55.136747    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:55.136823    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:55.136823    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:55.169237    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:55.169237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:57.723958    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:57.750382    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:57.784932    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.784932    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:57.788837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:57.815350    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.815350    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:57.819773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:57.850513    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.850513    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:57.854585    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:57.885405    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.885405    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:57.889340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:57.917143    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.917143    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:57.921061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:57.947843    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.947843    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:57.951577    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:57.983169    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.983169    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:57.986925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:58.016381    6576 logs.go:282] 0 containers: []
	W1205 08:06:58.016381    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:58.016381    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:58.016381    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:58.081766    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:58.081766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:58.122021    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:58.122021    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:58.216654    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:58.216654    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:58.216654    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:58.245369    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:58.245369    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:00.814255    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:00.841335    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:00.870336    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.870336    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:00.874294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:00.905321    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.905321    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:00.908814    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:00.940896    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.940896    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:00.944651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:00.975783    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.975855    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:00.979485    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:01.007166    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.007166    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:01.011052    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:01.038708    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.038708    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:01.043766    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:01.072944    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.072944    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:01.076562    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:01.104574    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.104623    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:01.104665    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:01.104665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:01.169748    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:01.169748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:01.210259    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:01.210259    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:01.310310    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:01.310310    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:01.310310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:01.336589    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:01.336589    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
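The cycle above is minikube's control-plane probe: it looks for a kube-apiserver process, then enumerates the expected `k8s_*` containers one by one. A minimal sketch of the same check, runnable from a shell inside the node (e.g. via `minikube ssh`); the container-name prefixes are taken directly from the `docker ps` filters in the log:

```bash
# Probe for the control-plane containers the log checks, in the same order.
# Empty output for a name reproduces the "No container was found" warnings.
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
  [ -n "$ids" ] && echo "${c}: ${ids}" || echo "no container matching ${c}"
done
```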
	I1205 08:07:03.889510    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:03.919078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:03.953291    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.953291    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:03.956276    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:03.986975    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.986975    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:03.991157    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:04.022935    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.022935    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:04.026117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:04.058273    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.058312    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:04.061868    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:04.093136    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.093136    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:04.096666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:04.122322    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.122349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:04.126167    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:04.158513    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.158545    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:04.161969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:04.190492    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.190569    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:04.190569    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:04.190569    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:04.259062    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:04.259062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:04.299558    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:04.299558    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:04.393556    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:04.393644    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:04.393644    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:04.420122    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:04.420122    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:06.976110    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:07.001980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:07.033975    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.033975    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:07.040090    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:07.069823    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.069823    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:07.074015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:07.103072    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.103072    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:07.107448    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:07.138770    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.138770    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:07.142987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:07.174660    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.174660    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:07.178913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:07.209719    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.209719    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:07.215472    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:07.243539    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.243539    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:07.248737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:07.279448    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.279448    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:07.279448    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:07.279448    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:07.345481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:07.346489    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:07.384275    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:07.384275    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:07.479588    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:07.479588    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:07.479588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:07.506786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:07.506786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:10.078099    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:10.103951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:10.139034    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.139034    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:10.142691    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:10.174629    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.174629    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:10.178323    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:10.206817    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.206817    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:10.210968    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:10.239729    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.239820    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:10.245043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:10.277712    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.277712    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:10.283741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:10.315362    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.315362    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:10.318268    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:10.346693    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.346693    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:10.350670    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:10.379081    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.379081    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:10.379081    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:10.379081    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:10.443299    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:10.443299    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:10.482497    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:10.482497    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:10.567024    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:10.567024    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:10.567024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:10.596635    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:10.596635    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
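Every `describe nodes` attempt fails the same way: `kubectl`'s discovery calls to `https://localhost:8443/api` are refused, which means nothing is listening on the apiserver port at all. A quick way to confirm that from inside the node — a sketch assuming `ss` and `curl` are available in the node image:

```bash
# Confirm nothing is bound to the apiserver port (8443 is minikube's default).
sudo ss -tlnp | grep -w 8443 || echo "nothing listening on :8443"
# Unauthenticated health probe; "connection refused" here matches the log.
curl -sk https://localhost:8443/healthz || echo "healthz unreachable"
```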
	I1205 08:07:13.157670    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:13.186965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:13.222698    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.222730    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:13.226690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:13.261914    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.261957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:13.265780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:13.294590    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.294590    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:13.299066    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:13.329216    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.329216    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:13.334474    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:13.366263    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.366290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:13.369870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:13.398379    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.398379    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:13.402396    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:13.430465    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.430465    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:13.434253    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:13.462873    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.462905    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:13.462905    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:13.462949    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:13.525954    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:13.526955    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:13.566284    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:13.567284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:13.656971    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:13.656971    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:13.656971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:13.684284    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:13.684284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.241440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:16.268513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:16.302653    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.302653    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:16.306429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:16.337387    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.337387    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:16.342004    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:16.371449    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.371449    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:16.376376    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:16.406912    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.406912    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:16.410777    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:16.438875    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.438875    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:16.442983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:16.470299    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.470299    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:16.474336    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:16.504067    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.504067    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:16.508174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:16.536869    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.536869    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:16.536869    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:16.536869    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:16.624673    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:16.624703    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:16.624755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:16.653894    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:16.653894    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.701985    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:16.701985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:16.763148    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:16.763148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.307232    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:19.334513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:19.371034    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.371140    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:19.375038    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:19.403110    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.403186    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:19.407168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:19.435904    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.435904    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:19.440294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:19.470700    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.470700    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:19.474611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:19.502846    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.502915    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:19.506400    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:19.540483    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.540483    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:19.544695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:19.576470    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.576501    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:19.579834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:19.609587    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.609587    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:19.609587    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:19.609587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.653000    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:19.653000    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:19.747787    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:19.747787    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:19.747787    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:19.774804    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:19.774804    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:19.825222    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:19.825338    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
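With no `k8s_*` containers present at all, the next question is whether the container runtime and kubelet units are even running; the log already pulls their journals (`kubelet`, `docker`, `cri-docker`). A short status check using those same unit names:

```bash
# Unit names taken from the journalctl invocations in the log above.
for u in kubelet docker cri-docker; do
  state=$(systemctl is-active "$u" 2>/dev/null)
  printf '%-12s %s\n' "$u" "${state:-unknown}"
done
```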
	I1205 08:07:22.394074    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:22.419163    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:22.454202    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.454202    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:22.457716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:22.487462    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.487615    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:22.491427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:22.522398    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.522398    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:22.526148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:22.554536    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.554536    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:22.558447    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:22.590329    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.590401    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:22.595088    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:22.626553    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.626553    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:22.630372    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:22.658911    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.658911    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:22.662715    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:22.692369    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.692444    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:22.692468    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:22.692468    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.759391    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:22.759391    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:22.801415    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:22.801415    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:22.891643    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:22.881338   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.883456   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.887030   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.888265   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.889355   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:22.891710    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:22.891738    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:22.922662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:22.922662    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:25.480645    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:25.506403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:25.536534    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.536600    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:25.540233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:25.568373    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.568373    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:25.572581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:25.604196    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.604196    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:25.608476    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:25.639923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.640007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:25.643813    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:25.673923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.673923    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:25.677542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:25.709156    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.709156    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:25.712910    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:25.744371    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.744371    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:25.750463    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:25.778113    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.778113    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:25.778113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:25.778113    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:25.842953    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:25.842953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:25.881310    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:25.881310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:25.976920    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:25.964944   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.966342   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.968369   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.969905   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.970655   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:25.976920    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:25.976920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:26.005828    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:26.005889    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
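The timestamps show the whole probe repeating roughly every 2.5 seconds. A sketch of an equivalent bounded wait, reusing the same `pgrep` pattern the log runs; the 90-second budget is illustrative, not minikube's actual timeout:

```bash
# Poll for a kube-apiserver process at ~2.5 s intervals until a deadline.
deadline=$((SECONDS + 90))   # illustrative budget, not minikube's real timeout
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  if [ "$SECONDS" -ge "$deadline" ]; then
    echo "timed out waiting for kube-apiserver" >&2
    exit 1
  fi
  sleep 2.5
done
echo "kube-apiserver is running"
```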
	I1205 08:07:28.568522    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:28.594981    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:28.628025    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.628025    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:28.631569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:28.661047    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.661047    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:28.664662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:28.692667    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.692667    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:28.696624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:28.725878    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.725944    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:28.730056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:28.758073    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.758129    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:28.761794    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:28.788812    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.788812    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:28.793030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:28.839778    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.839778    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:28.843937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:28.873288    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.873288    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:28.873288    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:28.873288    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:28.937414    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:28.937414    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:28.975610    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:28.975610    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:29.110286    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:29.068093   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.099868   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.101288   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.103705   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.105454   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:29.110286    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:29.110286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:29.140120    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:29.140120    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:31.695315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:31.723717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:31.755093    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.755155    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:31.758672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:31.786260    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.786260    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:31.790917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:31.817450    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.817450    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:31.822438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:31.852769    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.852788    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:31.856218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:31.885715    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.885715    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:31.890036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:31.919240    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.919240    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:31.924888    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:31.956860    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.956860    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:31.960848    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:31.989055    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.989055    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:31.989055    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:31.989055    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:32.055751    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:32.055751    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:32.091848    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:32.091848    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:32.183494    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:32.172400   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.173483   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.174469   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.175868   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.177099   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:32.183494    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:32.183494    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:32.211020    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:32.211056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:34.770702    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:34.796134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:34.830020    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.830052    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:34.833506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:34.860829    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.860829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:34.864718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:34.895302    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.895302    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:34.899305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:34.928933    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.928933    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:34.935599    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:34.964256    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.964280    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:34.967945    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:34.995571    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.995571    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:35.001155    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:35.038603    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.038603    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:35.042249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:35.075025    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.075025    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:35.075025    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:35.075025    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:35.136020    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:35.136020    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:35.198233    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:35.198233    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:35.236713    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:35.236713    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:35.327635    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:35.315598   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.316759   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.320319   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.322127   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.323353   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:35.327659    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:35.327659    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:37.859618    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:37.890074    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:37.922724    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.922724    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:37.926571    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:37.959720    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.959720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:37.963770    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:37.991602    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.991602    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:37.995673    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:38.023771    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.023771    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:38.030170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:38.061676    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.061676    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:38.065660    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:38.116492    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.116542    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:38.122475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:38.151483    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.151483    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:38.155624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:38.184512    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.184512    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:38.184512    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:38.184512    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:38.221972    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:38.221972    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:38.315283    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:38.304319   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.306082   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.307978   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.309605   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.310846   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:38.315283    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:38.315283    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:38.342209    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:38.342209    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:38.391392    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:38.391470    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:40.955418    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:40.982062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:41.015938    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.015938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:41.019996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:41.049917    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.049917    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:41.052925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:41.084946    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.084946    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:41.088068    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:41.120218    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.120297    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:41.123688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:41.152948    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.152948    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:41.156508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:41.183795    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.183795    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:41.187681    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:41.217097    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.217097    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:41.221130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:41.252354    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.252354    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:41.252354    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:41.252354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:41.345903    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:41.332593   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.336834   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.339033   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340171   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340983   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:41.345903    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:41.345903    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:41.373149    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:41.373149    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:41.423553    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:41.423553    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:41.485144    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:41.485144    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:44.029139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:44.056384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:44.087995    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.088078    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:44.091865    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:44.118934    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.118934    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:44.122494    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:44.150822    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.150864    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:44.154454    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:44.183401    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.183401    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:44.187086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:44.214588    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.214644    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:44.217896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:44.249548    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.249548    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:44.253290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:44.281230    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.281230    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:44.284996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:44.314362    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.314426    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:44.314426    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:44.314426    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:44.378166    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:44.378166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:44.420024    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:44.420024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:44.510942    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:44.501504   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.502772   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.503633   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.506343   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.507775   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:44.510942    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:44.510942    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:44.539432    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:44.539482    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:47.095962    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:47.121976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:47.155042    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.155042    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:47.159040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:47.188768    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.188768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:47.192847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:47.220500    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.220500    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:47.224299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:47.252483    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.252483    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:47.256264    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:47.285852    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.285852    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:47.290573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:47.319383    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.319450    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:47.323007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:47.353203    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.353203    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:47.357241    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:47.385498    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.385498    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:47.385498    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:47.385498    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:47.449686    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:47.449686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:47.490407    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:47.490407    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:47.577868    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:47.566167   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.567021   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.569823   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.570745   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.574800   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:47.577868    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:47.577868    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:47.604652    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:47.604652    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:50.157279    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:50.184328    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:50.218852    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.218852    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:50.222438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:50.250551    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.250571    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:50.254169    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:50.285371    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.285424    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:50.289741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:50.320093    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.320093    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:50.323845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:50.357038    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.357084    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:50.360291    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:50.389753    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.389829    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:50.392859    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:50.423710    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.423710    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:50.427343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:50.454456    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.454456    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:50.454456    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:50.454456    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:50.516581    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:50.516581    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:50.555412    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:50.555412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:50.648402    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:50.638282   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.639233   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.641786   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.642733   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.645724   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:50.648402    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:50.648402    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:50.673701    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:50.673701    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:53.230542    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:53.256707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:53.290781    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.290781    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:53.294254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:53.326261    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.326261    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:53.329838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:53.359630    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.359630    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:53.364896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:53.396046    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.396046    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:53.400120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:53.428713    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.428713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:53.432409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:53.462479    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.462479    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:53.467583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:53.495306    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.495306    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:53.499565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:53.530622    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.530622    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:53.530622    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:53.530622    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:53.593183    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:53.593183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:53.633807    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:53.633807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:53.721016    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:53.712922   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.714157   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.715494   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.716874   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.718161   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:53.721016    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:53.721016    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:53.748333    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:53.748442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.315862    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:56.341452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:56.374032    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.374063    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:56.377843    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:56.408635    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.408698    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:56.412330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:56.442083    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.442083    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:56.445380    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:56.473679    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.473749    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:56.477263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:56.506107    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.506156    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:56.510975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:56.538958    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.539022    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:56.542581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:56.572303    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.572303    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:56.576375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:56.604073    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.604073    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:56.604073    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:56.604145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:56.641552    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:56.641552    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:56.734944    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:56.721878   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.722727   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.725718   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.727423   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.728368   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:56.735002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:56.735046    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:56.770367    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:56.770412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.826378    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:56.826378    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.393300    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:59.417617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:59.452220    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.452220    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:59.456092    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:59.484787    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.484787    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:59.488348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:59.516670    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.516670    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:59.521214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:59.548048    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.548048    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:59.551862    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:59.576869    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.576869    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:59.581825    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:59.610579    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.610579    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:59.614523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:59.642507    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.642507    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:59.646397    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:59.675062    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.675062    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:59.675062    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:59.675062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.739704    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:59.739704    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:59.782363    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:59.782363    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:59.876076    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:59.865923   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.867089   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.868088   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.870067   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.871213   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:59.865923   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.867089   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.868088   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.870067   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.871213   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:59.876076    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:59.876076    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:59.903005    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:59.903005    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
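The probe sequence above repeats once per cycle: a pgrep check for the apiserver process, then one docker ps name filter per control-plane component. As a hedged illustration only (the filter names are copied verbatim from the cycle above; the loop itself is not minikube's own code), the same probe can be re-run by hand on the node:

#!/usr/bin/env bash
# Sketch: re-run the container-presence probes from the log by hand.
# Filter names are taken verbatim from the cycle above; the loop is illustrative.
for name in k8s_kube-apiserver k8s_etcd k8s_coredns k8s_kube-scheduler \
            k8s_kube-proxy k8s_kube-controller-manager k8s_kindnet \
            k8s_kubernetes-dashboard; do
  ids=$(docker ps -a --filter "name=${name}" --format '{{.ID}}')
  if [ -z "${ids}" ]; then
    echo "no container matching ${name}"   # corresponds to the W-level lines above
  else
    echo "${name}: ${ids}"
  fi
done

An empty result for every filter, as in every cycle of this excerpt, means kubelet never created any control-plane containers.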
	I1205 08:08:02.456978    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:02.483895    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:02.516374    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.516374    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:02.520443    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:02.553066    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.553148    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:02.556844    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:02.585220    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.585220    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:02.589183    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:02.620655    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.620655    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:02.625389    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:02.659292    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.659369    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:02.662727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:02.690972    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.690972    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:02.694944    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:02.723751    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.723797    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:02.727357    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:02.764750    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.764750    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:02.764750    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:02.764750    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:02.834733    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:02.834733    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:02.873432    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:02.873432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:02.963503    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:02.952119   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.955623   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.956877   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.957681   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.960011   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:02.952119   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.955623   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.956877   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.957681   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.960011   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:02.963503    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:02.963503    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:02.992067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:02.992067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
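Every describe-nodes failure above reduces to the same symptom: nothing is listening on localhost:8443 inside the node. A minimal sketch for confirming that directly, assuming curl and ss are available in the minikube container (an assumption; neither command appears in the log, and <profile> is a placeholder):

# Sketch only: check whether anything listens on the apiserver port.
# Assumes ss and curl exist in the node image; <profile> is hypothetical.
minikube ssh -p <profile> -- "sudo ss -ltn | grep 8443 || echo 'nothing listening on :8443'"
minikube ssh -p <profile> -- "curl -sk --max-time 5 https://localhost:8443/healthz || echo 'apiserver not reachable'"

A refused connection here would match the "dial tcp [::1]:8443: connect: connection refused" errors kubectl reports in each cycle.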
	I1205 08:08:05.547340    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:05.572946    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:05.605473    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.605473    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:05.609479    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:05.639072    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.639072    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:05.642702    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:05.674126    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.674174    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:05.678318    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:05.710378    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.710378    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:05.713988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:05.743263    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.743263    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:05.748802    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:05.777467    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.777467    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:05.781993    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:05.816147    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.816147    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:05.820044    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:05.849173    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.849173    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:05.849173    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:05.849173    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:05.937771    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:05.926656   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.928398   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.929479   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.932790   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.933608   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:05.926656   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.928398   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.929479   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.932790   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.933608   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:05.937771    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:05.937771    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:05.965110    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:05.965110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:06.012927    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:06.012927    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:06.076287    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:06.076287    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:08.621402    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:08.647297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:08.678598    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.678679    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:08.681866    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:08.710779    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.710856    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:08.714554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:08.745379    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.745379    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:08.750135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:08.785796    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.785840    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:08.791900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:08.823728    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.823778    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:08.827659    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:08.858652    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.858726    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:08.862304    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:08.893238    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.893287    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:08.896783    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:08.927578    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.927578    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:08.927578    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:08.927578    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:08.990752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:08.990752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:09.030509    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:09.030509    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:09.116112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:09.116629    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:09.116629    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:09.148307    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:09.148307    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
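Each "Gathering logs for ..." step in the cycles above maps to exactly one shell command, identical from cycle to cycle. Collected into one script (commands copied verbatim from the log; only the comments are added), they can be replayed on the node for manual triage:

#!/usr/bin/env bash
# The five log-gathering commands from the cycles above, verbatim.
sudo journalctl -u kubelet -n 400                                          # kubelet
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # dmesg
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig                                # describe nodes
sudo journalctl -u docker -u cri-docker -n 400                             # Docker / cri-docker
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status

Only the describe-nodes step fails; the timestamps show the journalctl, dmesg, and container-status steps each completing in well under a second.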
	I1205 08:08:11.720341    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:11.750190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:11.784223    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.784247    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:11.789837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:11.819184    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.819184    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:11.824438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:11.852058    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.852058    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:11.857984    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:11.888391    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.888391    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:11.891707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:11.921973    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.921973    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:11.925426    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:11.953845    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.953845    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:11.957863    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:11.987150    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.987236    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:11.990921    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:12.018843    6576 logs.go:282] 0 containers: []
	W1205 08:08:12.018895    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:12.018895    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:12.018918    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:12.048523    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:12.048523    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:12.099490    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:12.099490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:12.163368    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:12.163368    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:12.204867    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:12.204867    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:12.290894    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:14.795945    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:14.821749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:14.851399    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.851399    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:14.855010    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:14.887370    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.887370    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:14.891117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:14.922139    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.922139    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:14.926245    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:14.954095    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.954095    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:14.959551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:14.987564    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.987564    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:14.991080    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:15.023941    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.023941    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:15.027344    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:15.056411    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.056474    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:15.059417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:15.092400    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.092400    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:15.092400    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:15.092400    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:15.119932    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:15.119932    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:15.169067    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:15.169067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:15.232603    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:15.232603    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:15.276106    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:15.276106    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:15.363421    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:17.870108    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:17.895889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:17.927528    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.927528    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:17.931166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:17.959105    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.959105    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:17.962846    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:17.994011    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.994011    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:17.998047    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:18.026606    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.026677    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:18.030234    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:18.061389    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.061389    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:18.065290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:18.096454    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.096454    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:18.100320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:18.129213    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.129213    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:18.133040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:18.160088    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.160111    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:18.160111    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:18.160111    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:18.221228    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:18.221228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:18.258886    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:18.258886    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:18.348416    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:18.348496    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:18.348525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:18.379855    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:18.379855    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:20.936239    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:20.959002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:20.990013    6576 logs.go:282] 0 containers: []
	W1205 08:08:20.990085    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:20.993773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:21.021884    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.021925    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:21.025964    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:21.054531    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.054531    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:21.058277    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:21.088997    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.089078    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:21.092631    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:21.121326    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.121360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:21.125135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:21.160429    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.160496    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:21.164226    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:21.192488    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.192557    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:21.196294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:21.228406    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.228445    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:21.228445    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:21.228495    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:21.291604    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:21.292600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:21.331218    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:21.331218    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:21.412454    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:21.412454    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:21.412454    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:21.441164    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:21.441229    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:23.994395    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:24.020275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:24.054682    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.054682    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:24.058674    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:24.089654    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.089654    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:24.093569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:24.123224    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.123224    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:24.127942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:24.155350    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.155350    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:24.159192    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:24.192652    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.192652    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:24.197194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:24.229851    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.229851    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:24.233957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:24.262158    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.262158    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:24.266478    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:24.297683    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.297766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:24.297766    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:24.297766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:24.388464    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:24.388464    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:24.388464    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:24.416764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:24.416764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:24.468678    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:24.469203    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:24.532678    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:24.532678    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.075175    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:27.104797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:27.137440    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.137440    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:27.141581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:27.171103    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.171126    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:27.174625    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:27.205068    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.205102    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:27.208711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:27.237765    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.237806    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:27.241719    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:27.269838    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.269838    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:27.273353    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:27.300835    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.300835    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:27.304633    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:27.333062    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.333062    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:27.338523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:27.366572    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.366572    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:27.366572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:27.366572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.402514    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:27.402514    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:27.499452    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:27.499452    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:27.499452    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:27.528089    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:27.528089    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:27.596881    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:27.596881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.168154    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:30.194986    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:30.228709    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.228709    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:30.233961    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:30.268256    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.268256    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:30.271667    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:30.300456    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.300519    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:30.303870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:30.335955    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.335955    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:30.339590    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:30.367829    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.367829    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:30.373123    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:30.401294    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.401327    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:30.404974    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:30.436526    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.436526    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:30.440246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:30.478544    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.478599    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:30.478599    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:30.478651    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.544716    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:30.544716    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:30.584496    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:30.584496    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:30.671308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:30.671352    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:30.671352    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:30.699029    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:30.699029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:33.251744    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:33.280500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:33.311912    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.311912    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:33.316407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:33.347966    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.347966    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:33.351341    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:33.386249    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.386249    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:33.389828    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:33.420571    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.420571    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:33.423584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:33.450599    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.450599    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:33.453949    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:33.488480    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.488480    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:33.492797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:33.523382    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.523382    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:33.526929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:33.561860    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.561860    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:33.561860    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:33.561860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:33.628425    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:33.628425    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:33.666453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:33.666453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:33.756872    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:33.756872    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:33.756872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:33.785780    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:33.785780    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:36.342322    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:36.368238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:36.399529    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.399529    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:36.402710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:36.430561    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.430561    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:36.434233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:36.461894    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.461894    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:36.466270    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:36.492354    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.492354    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:36.495668    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:36.526818    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.526818    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:36.530606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:36.564752    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.564752    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:36.569130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:36.598403    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.598403    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:36.603579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:36.635757    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.635757    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:36.635757    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:36.635757    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:36.702715    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:36.702715    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:36.740740    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:36.740740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:36.827779    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:36.827779    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:36.827779    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:36.855113    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:36.855148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.404078    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:39.428626    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:39.461540    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.461540    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:39.465369    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:39.497259    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.497368    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:39.501168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:39.532526    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.532526    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:39.537388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:39.570114    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.570114    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:39.574332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:39.607392    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.607392    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:39.611100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:39.640933    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.640933    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:39.644381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:39.673224    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.673224    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:39.678235    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:39.706766    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.706766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:39.706766    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:39.706766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:39.734527    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:39.734527    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.787138    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:39.787138    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:39.849637    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:39.849637    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:39.889331    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:39.889331    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:39.977390    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:42.481792    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:42.508550    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:42.541632    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.541632    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:42.545635    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:42.595829    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.595829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:42.601196    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:42.630888    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.630888    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:42.634929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:42.665451    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.665451    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:42.668581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:42.701244    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.701244    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:42.705368    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:42.737250    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.737250    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:42.740441    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:42.766622    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.766700    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:42.770278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:42.801486    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.801486    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:42.801486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:42.801486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:42.866794    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:42.866930    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:42.906819    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:42.906819    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:43.000226    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:43.000226    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:43.000226    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:43.027011    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:43.027011    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:45.586794    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:45.615024    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:45.642666    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.642666    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:45.646348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:45.675867    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.675867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:45.679650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:45.711785    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.711785    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:45.717449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:45.750065    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.750109    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:45.753406    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:45.782908    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.782908    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:45.786362    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:45.816309    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.816309    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:45.819889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:45.847629    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.847656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:45.850622    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:45.880676    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.880733    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:45.880759    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:45.880759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:45.943843    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:45.943843    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:45.984212    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:45.984212    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:46.071821    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:46.071821    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:46.071821    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:46.098280    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:46.098280    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:48.651285    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:48.676952    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:48.706696    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.706696    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:48.710427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:48.738766    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.738766    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:48.746145    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:48.773486    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.773486    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:48.778542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:48.805908    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.805908    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:48.809817    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:48.840360    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.840360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:48.843723    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:48.871560    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.871560    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:48.875316    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:48.903556    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.903556    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:48.908924    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:48.938455    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.938455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:48.938455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:48.938455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:49.001951    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:49.001951    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:49.042098    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:49.042098    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:49.131350    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:49.131350    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:49.131350    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:49.166759    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:49.166759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:51.724851    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:51.752650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:51.780528    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.780542    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:51.784422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:51.816577    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.816577    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:51.819989    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:51.849244    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.849244    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:51.853211    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:51.881159    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.881222    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:51.884831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:51.917237    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.917237    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:51.921202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:51.951018    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.951018    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:51.955222    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:51.982262    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.982262    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:51.986170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:52.013482    6576 logs.go:282] 0 containers: []
	W1205 08:08:52.013526    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:52.013564    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:52.013564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:52.050334    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:52.050334    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:52.144178    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:52.133526   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.134871   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.136142   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.137800   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.139220   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:52.144178    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:52.144178    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:52.171135    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:52.171135    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:52.223993    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:52.223993    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:54.792613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:54.817042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:54.848768    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.848768    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:54.852580    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:54.881045    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.881045    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:54.885194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:54.915368    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.915368    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:54.919753    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:54.952592    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.952679    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:54.956477    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:54.989304    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.989357    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:54.992976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:55.025855    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.025855    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:55.029407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:55.059218    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.059290    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:55.063529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:55.092992    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.092992    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:55.092992    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:55.092992    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:55.201249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:55.191114   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.192097   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.193360   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.194595   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.195561   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:55.201249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:55.201249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:55.228877    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:55.228907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:55.286872    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:55.286872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:55.357844    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:55.357844    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:57.912434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:57.938621    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:57.968927    6576 logs.go:282] 0 containers: []
	W1205 08:08:57.968927    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:57.975548    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:58.003200    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.003200    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:58.006983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:58.037886    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.037886    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:58.041594    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:58.072037    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.072037    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:58.076711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:58.118201    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.118201    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:58.122059    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:58.150468    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.150468    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:58.154554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:58.186009    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.186009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:58.189676    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:58.219204    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.219204    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:58.219204    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:58.219204    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:58.283572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:58.283572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:58.322291    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:58.322291    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:58.406023    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:58.406023    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:58.406023    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:58.434361    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:58.434881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
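	Every retry cycle above performs the same sweep: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} query per expected control-plane container, and every filter returns zero IDs, meaning kubeadm never got the static pods running. A minimal bash sketch of the equivalent sweep (the loop itself is illustrative, not minikube's code, and assumes a shell inside the minikube node):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  # an empty ID list mirrors the "No container was found matching" warnings above
	  ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
	  [ -z "$ids" ] && echo "no container matching ${c}"
	done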
	I1205 08:09:00.986031    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:01.012520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:01.041860    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.041860    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:01.045736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:01.074168    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.074168    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:01.081136    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:01.115160    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.115160    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:01.121214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:01.152200    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.152200    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:01.155786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:01.187849    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.187849    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:01.193651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:01.220927    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.220927    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:01.225251    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:01.262648    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.262648    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:01.266549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:01.298388    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.298388    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:01.298459    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:01.298491    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:01.389098    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:01.389126    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:01.389126    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:01.418232    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:01.418232    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:01.463083    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:01.463083    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:01.528159    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:01.528159    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
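	The repeated dial tcp [::1]:8443: connect: connection refused errors mean nothing is listening on the apiserver port at all, consistent with the empty k8s_kube-apiserver container list above. A quick manual check, assuming ss and curl are available inside the node (these commands are a sketch, not part of the logged flow):

	ss -ltn | grep ':8443'                     # is anything listening on the apiserver port?
	curl -sk https://localhost:8443/healthz    # succeeds only once kube-apiserver is actually up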
	I1205 08:09:04.078505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:04.106462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:04.136412    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.136412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:04.139845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:04.168393    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.168465    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:04.171965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:04.203281    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.203281    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:04.207129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:04.235244    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.235244    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:04.239720    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:04.271746    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.271746    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:04.279903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:04.308486    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.308486    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:04.312482    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:04.341988    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.341988    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:04.345122    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:04.378152    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.378152    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:04.378152    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:04.378152    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:04.443403    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:04.443403    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:04.484661    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:04.484661    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:04.574793    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:04.560661   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.561649   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.566401   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.568432   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.570652   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:04.574793    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:04.574793    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:04.606357    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:04.606357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
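	The gathering steps themselves are ordinary systemd and kernel queries: the last 400 lines of the kubelet unit, the docker and cri-docker units, and dmesg filtered to warn level and above. To skim only the failures out of the kubelet tail by hand, one hedged variant (the grep pattern is an assumption, not part of the logged command):

	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail'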
	I1205 08:09:07.162554    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:07.194738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:07.227905    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.227977    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:07.232048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:07.262861    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.262861    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:07.266595    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:07.297184    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.297184    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:07.300873    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:07.331523    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.331523    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:07.335838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:07.367893    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.367893    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:07.371282    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:07.400934    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.400934    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:07.403928    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:07.431616    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.431616    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:07.435314    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:07.469043    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.469043    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:07.469043    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:07.469043    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:07.497832    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:07.497832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:07.547846    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:07.547846    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:07.611682    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:07.611682    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:07.651105    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:07.651105    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:07.741756    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:07.730861   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.731799   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.734095   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.735203   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.736136   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
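	Judging by the timestamps, the whole diagnostic cycle repeats roughly every three seconds, gated on the pgrep probe for a kube-apiserver process. The wait loop is equivalent to the following, using the exact pattern the log shows (the three-second sleep is inferred from the timestamps, not logged):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # retry until a kube-apiserver process for this profile appears
	done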
	I1205 08:09:10.247138    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:10.275755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:10.311911    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.311911    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:10.317436    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:10.347243    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.347243    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:10.353296    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:10.384412    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.384412    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:10.389236    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:10.419505    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.419505    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:10.423688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:10.451213    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.451213    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:10.457390    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:10.485001    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.485001    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:10.488370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:10.519268    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.519268    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:10.524029    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:10.551544    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.551544    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:10.551544    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:10.551544    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:10.618971    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:10.618971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:10.657753    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:10.657753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:10.751422    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:10.740331   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.741382   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.742135   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.746174   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.747103   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:10.751422    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:10.751422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:10.777901    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:10.778003    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
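	The container status command is itself a fallback chain: the backtick substitution resolves to crictl's path when it is installed, or to the bare word crictl otherwise, so a missing or failing crictl makes the first half of the pipeline fail and the || falls through to docker ps -a. The same command with the deprecated backticks replaced by $( ):

	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a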
	I1205 08:09:13.340867    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:13.373007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:13.404147    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.404191    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:13.408078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:13.440768    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.440768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:13.444748    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:13.474390    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.474390    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:13.478381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:13.508004    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.508057    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:13.511749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:13.543789    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.543789    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:13.547384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:13.576308    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.576377    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:13.579736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:13.609792    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.609792    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:13.613298    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:13.642091    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.642091    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:13.642091    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:13.642091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:13.671624    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:13.671686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:13.718995    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:13.718995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:13.782056    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:13.782056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:13.821453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:13.821453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:13.928916    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:13.918145   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.919184   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.920131   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.922446   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.923724   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
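	The describe-nodes attempt goes through the Kubernetes-versioned kubectl that minikube stages under /var/lib/minikube/binaries, pointed at the node-local kubeconfig; run by hand inside the node, it fails identically for as long as the apiserver is down:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig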
	I1205 08:09:16.433905    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:16.459887    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:16.496160    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.496160    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:16.499639    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:16.526877    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.526877    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:16.530750    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:16.560261    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.560261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:16.563991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:16.595914    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.595914    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:16.599869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:16.627694    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.627694    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:16.632403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:16.660769    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.660769    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:16.664194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:16.692707    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.692707    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:16.698036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:16.728749    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.728749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:16.728749    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:16.728749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:16.778953    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:16.779017    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:16.841091    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:16.841091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:16.881145    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:16.881145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:16.969295    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:16.959645   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.960522   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.962481   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.963671   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.964721   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:16.969332    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:16.969362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:19.502757    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:19.529429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:19.557499    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.557499    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:19.561490    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:19.590127    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.590127    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:19.594042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:19.622382    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.622382    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:19.626026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:19.653513    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.653513    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:19.656672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:19.686153    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.686153    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:19.691297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:19.720831    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.720858    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:19.724786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:19.751107    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.751107    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:19.754979    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:19.782999    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.782999    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:19.782999    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:19.782999    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:19.844801    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:19.844801    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:19.884439    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:19.884439    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:19.977224    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:19.964996   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.968924   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.970786   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.973180   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.975233   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:19.977224    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:19.977224    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:20.007404    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:20.007404    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:22.569427    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:22.596121    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:22.628181    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.628181    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:22.632086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:22.660848    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.660848    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:22.664755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:22.694182    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.694261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:22.698085    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:22.726532    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.726600    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:22.730354    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:22.757319    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.757355    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:22.760937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:22.792791    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.792791    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:22.799388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:22.841372    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.841372    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:22.845285    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:22.879377    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.879377    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:22.879377    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:22.879377    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:22.946156    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:22.946156    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:22.990461    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:22.990461    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:23.119453    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:23.109436   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.110223   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.112884   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.115261   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.117081   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:23.119453    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:23.119453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:23.146199    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:23.147241    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:25.703191    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:25.728570    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:25.758884    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.758884    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:25.765071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:25.792957    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.792957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:25.796556    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:25.825466    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.825466    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:25.828728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:25.857451    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.857521    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:25.861306    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:25.887700    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.887700    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:25.891071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:25.920875    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.920875    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:25.924452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:25.952908    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.952952    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:25.956305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:25.987608    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.987608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:25.987608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:25.987608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:26.027162    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:26.027162    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:26.120245    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:26.107417   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.108200   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.112823   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.113923   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.114975   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:26.120245    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:26.120245    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:26.147670    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:26.147697    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:26.198923    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:26.198963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:28.769076    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:28.797716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:28.829859    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.829898    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:28.833257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:28.864507    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.864507    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:28.868407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:28.898827    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.898827    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:28.902971    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:28.933087    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.933087    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:28.937063    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:28.964140    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.964140    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:28.968403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:28.997620    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.997620    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:29.001779    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:29.035745    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.035745    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:29.038757    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:29.068429    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.068429    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:29.068429    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:29.068429    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:29.124688    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:29.124688    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:29.188675    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:29.188675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:29.227887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:29.227887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:29.312828    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:29.301515   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.302784   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.303557   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.306066   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.307186   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:29.301515   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.302784   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.303557   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.306066   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.307186   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:29.312828    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:29.312828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
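
Every describe-nodes attempt in these passes fails identically: kubectl on the node dials https://localhost:8443, retries API-group discovery five times (the memcache.go:265 lines), and each dial ends in connection refused. Taken together with the docker ps probes finding no kube-apiserver container, this points to nothing listening on the port at all, rather than a slow or hung apiserver. A raw TCP dial distinguishes the two cases; the snippet below is a hypothetical standalone check, not part of the test harness:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the apiserver port directly. "connection refused" means the
        // port is closed (nothing bound), which matches the log; a timeout
        // would instead suggest a process that is bound but not answering.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("port 8443 is accepting connections")
    }
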
	I1205 08:09:31.845911    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:31.878797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:31.916523    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.916523    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:31.919583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:31.950914    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.950976    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:31.954687    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:31.983555    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.983580    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:31.987603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:32.021007    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.021007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:32.025190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:32.056980    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.057033    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:32.060500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:32.104780    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.104780    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:32.108815    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:32.135429    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.135494    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:32.138969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:32.171260    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.171260    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:32.171260    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:32.171260    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:32.237752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:32.237752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:32.277887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:32.277887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:32.365810    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:32.355223   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.356563   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.358244   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.359525   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.360794   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:32.355223   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.356563   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.358244   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.359525   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.360794   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:32.365810    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:32.365810    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:32.392252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:32.392252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
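
The container-status gather above uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl when it is on PATH, and fall back to a plain docker ps -a if the crictl invocation fails. A Go rendering of the same try-then-fall-back shape might look like the following (the command names come from the log; the psOutput helper is invented for illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // psOutput tries crictl first and falls back to docker, mirroring the
    // "crictl ps -a || docker ps -a" pattern in the log. Invented helper.
    func psOutput() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := psOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(out)
    }
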
	I1205 08:09:34.943627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:34.969529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:35.010672    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.010672    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:35.015462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:35.048036    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.048036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:35.055991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:35.103005    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.103005    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:35.106890    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:35.137906    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.137906    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:35.141530    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:35.172625    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.172625    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:35.176175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:35.209474    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.209474    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:35.213175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:35.244787    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.244787    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:35.248557    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:35.275127    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.275158    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:35.275158    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:35.275158    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:35.334298    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:35.334298    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:35.373969    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:35.373969    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:35.459656    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:35.459755    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:35.459755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:35.489057    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:35.489057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
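
The remaining gathers are bounded on purpose: journalctl -u kubelet -n 400 and journalctl -u docker -u cri-docker -n 400 take only the last 400 journal lines per pass, and dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 restricts kernel messages to warning severity and above, with no pager or colors. A small sketch of consuming such a bounded gather in Go follows; counting error-looking lines is an invented post-processing step, not something the harness does:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Bounded gather, as in the log: only the last 400 kubelet journal
        // lines, so repeated polling stays cheap even on a busy node.
        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        n := 0
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(strings.ToLower(line), "error") {
                n++
            }
        }
        fmt.Printf("%d error-looking lines in the last 400\n", n)
    }
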
	I1205 08:09:38.049404    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:38.073507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:38.101267    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.101337    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:38.104951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:38.134276    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.134276    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:38.139127    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:38.166437    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.166437    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:38.170518    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:38.199145    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.199145    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:38.202760    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:38.230466    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.230466    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:38.233640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:38.263867    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.263867    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:38.267542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:38.297791    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.297791    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:38.301874    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:38.332980    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.332980    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:38.332980    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:38.332980    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:38.396086    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:38.396086    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:38.433018    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:38.433018    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:38.516847    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:38.516847    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:38.516847    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:38.545985    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:38.545985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:41.097758    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:41.125607    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:41.156423    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.156423    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:41.159823    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:41.188324    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.188383    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:41.192299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:41.224751    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.224789    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:41.228655    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:41.257790    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.257790    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:41.261606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:41.292935    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.292999    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:41.296487    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:41.322728    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.322728    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:41.326980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:41.355569    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.355569    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:41.359412    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:41.388228    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.388228    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:41.388228    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:41.388228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:41.454094    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:41.454094    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:41.492536    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:41.492536    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:41.584848    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:41.584892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:41.584892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:41.611807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:41.611807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:44.169483    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:44.196254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:44.224412    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.224412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:44.229628    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:44.257724    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.257724    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:44.262355    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:44.289872    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.289926    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:44.293506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:44.321891    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.321891    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:44.325045    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:44.354424    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.354424    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:44.357980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:44.388960    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.388960    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:44.392224    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:44.424484    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.424484    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:44.427710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:44.458834    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.458834    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:44.458834    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:44.458834    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:44.523336    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:44.523336    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:44.560362    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:44.560362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:44.656711    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:44.656711    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:44.656711    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:44.682009    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:44.683010    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.243380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:47.270606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:47.302678    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.302720    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:47.305835    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:47.334169    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.334213    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:47.338162    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:47.370622    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.370693    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:47.374238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:47.406764    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.406787    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:47.410449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:47.439290    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.439332    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:47.442816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:47.475239    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.475239    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:47.479100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:47.510196    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.510196    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:47.513831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:47.543315    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.543378    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:47.543378    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:47.543411    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:47.577600    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:47.577600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.651517    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:47.651517    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:47.717530    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:47.717530    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:47.757989    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:47.757989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:47.848615    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:50.354473    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:50.381662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:50.410303    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.410303    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:50.416210    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:50.443479    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.443479    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:50.447606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:50.475214    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.475214    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:50.479409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:50.508984    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.508984    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:50.513185    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:50.544532    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.544532    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:50.548200    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:50.578350    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.578350    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:50.583137    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:50.615656    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.615656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:50.619983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:50.649117    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.649117    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:50.649117    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:50.649117    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:50.678837    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:50.678837    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:50.730963    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:50.730963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:50.797442    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:50.797442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:50.839051    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:50.840050    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:50.934073    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
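
By this point the identical pass has run at 08:09:25, :28, :31, :34, :38, :41, :44, :47, and :50, with two more at :53 and :56 below; only timestamps and kubectl process IDs change, and the apiserver never comes up, so every pass ends in the same connection-refused block. A poll loop of this shape usually carries a deadline; a hypothetical sketch (three-second interval matching the cadence seen here, one-minute cap chosen arbitrarily):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Poll the apiserver endpoint every 3s, matching the cadence in the
        // log, but stop after a deadline instead of retrying indefinitely.
        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up: apiserver never became reachable")
    }
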
	I1205 08:09:53.440116    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:53.465957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:53.497390    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.497462    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:53.501077    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:53.529488    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.529488    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:53.536331    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:53.563367    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.563367    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:53.566361    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:53.596894    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.596894    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:53.600611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:53.630623    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.630623    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:53.634434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:53.664123    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.664123    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:53.668403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:53.697948    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.697948    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:53.701419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:53.730378    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.730462    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:53.730462    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:53.730462    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:53.798465    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:53.798465    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:53.841124    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:53.841124    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:53.935344    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.936318    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:53.936318    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:53.965040    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:53.965040    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:56.520907    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:56.551718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:56.584506    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.584506    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:56.588065    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:56.618214    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.618214    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:56.622199    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:56.650798    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.650798    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:56.654367    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:56.685409    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.685440    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:56.688781    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:56.719049    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.719163    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:56.722810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:56.753646    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.753646    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:56.757666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:56.793942    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.793942    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:56.798049    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:56.827315    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.827315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:56.827315    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:56.827315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:56.893213    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:56.893213    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:56.931234    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:56.931234    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:57.020142    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:57.020142    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:57.020142    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:57.048871    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:57.048871    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:59.606004    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:59.632524    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:59.662177    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.662177    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:59.666311    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:59.701152    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.701202    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:59.704398    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:59.733278    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.733278    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:59.738174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:59.769038    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.769038    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:59.773266    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:59.814259    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.814259    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:59.818330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:59.848066    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.848066    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:59.851684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:59.880029    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.880029    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:59.884457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:59.914608    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.914608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:59.914608    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:59.914608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:59.978490    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:59.978490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:00.018881    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:00.018881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:00.109744    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:00.109744    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:00.109744    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:00.137522    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:00.137591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:02.693722    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:02.718495    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:10:02.754864    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.754864    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:10:02.758547    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:10:02.795133    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.795231    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:10:02.798914    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:10:02.828115    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.828115    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:10:02.831263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:10:02.864241    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.864241    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:10:02.867861    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:10:02.895555    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.895555    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:10:02.901617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:10:02.931756    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.931756    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:10:02.935718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:10:02.964034    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.964034    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:10:02.968113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:10:03.000080    6576 logs.go:282] 0 containers: []
	W1205 08:10:03.000080    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:10:03.000080    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:03.000080    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:03.092694    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:03.094183    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:03.094183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:03.124625    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:03.124625    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:03.178920    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:10:03.178920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:10:03.237776    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:10:03.237776    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:05.783793    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:05.810874    6576 out.go:203] 
	W1205 08:10:05.812874    6576 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 08:10:05.812874    6576 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 08:10:05.812874    6576 out.go:285] * Related issues:
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 08:10:05.815880    6576 out.go:203] 
	
	
	==> Docker <==
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.859890520Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.859986630Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860002932Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860012733Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860021234Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860055437Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860095541Z" level=info msg="Initializing buildkit"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.987212646Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.997928393Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998072309Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998148017Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998246927Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:58:14 no-preload-104100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:58:15 no-preload-104100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:58:15 no-preload-104100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:13:25.111713   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:13:25.112587   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:13:25.115063   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:13:25.116159   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:13:25.116967   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.912373] CPU: 10 PID: 467231 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f59c4559b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f59c4559af6.
	[  +0.000001] RSP: 002b:00007fff7b401a80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.986945] CPU: 6 PID: 467375 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f68553b7b20
	[  +0.000010] Code: Unable to access opcode bytes at RIP 0x7f68553b7af6.
	[  +0.000001] RSP: 002b:00007ffe7761e510 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 08:13:25 up  3:47,  0 user,  load average: 0.36, 1.33, 2.74
	Linux no-preload-104100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:13:22 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:22 no-preload-104100 kubelet[17525]: E1205 08:13:22.270304   17525 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:13:22 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:13:22 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:13:22 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 05 08:13:22 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:22 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:23 no-preload-104100 kubelet[17551]: E1205 08:13:23.033675   17551 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:13:23 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:13:23 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:13:23 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1207.
	Dec 05 08:13:23 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:23 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:23 no-preload-104100 kubelet[17579]: E1205 08:13:23.788010   17579 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:13:23 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:13:23 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:13:24 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 05 08:13:24 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:24 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:24 no-preload-104100 kubelet[17592]: E1205 08:13:24.533107   17592 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:13:24 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:13:24 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:13:25 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1209.
	Dec 05 08:13:25 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:13:25 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 2 (619.3615ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.55s)
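
Note: the kubelet journal above shows why the apiserver never appears: every kubelet start exits immediately with "kubelet is configured to not run on a host using cgroup v1" (restart counter past 1200), and the Docker daemon on this WSL2 host logs a cgroup v1 deprecation warning and reports CgroupDriver:cgroupfs. A minimal way to confirm the host's cgroup mode, assuming standard coreutils and a Docker 20.10+ CLI (these commands are illustrative, not part of the test suite):

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means v1/hybrid.
    stat -fc %T /sys/fs/cgroup/

    # Docker reports which cgroup version it detected (1 or 2).
    docker info --format '{{.CgroupVersion}}'

On WSL2, a commonly suggested (but unverified here) remedy is to boot the VM with cgroup v2 only by adding kernelCommandLine = cgroup_no_v1=all under the [wsl2] section of %UserProfile%\.wslconfig and then running wsl --shutdown.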

TestStartStop/group/newest-cni/serial/Pause (13.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-042100 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (610.3165ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-042100 -n newest-cni-042100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (602.7779ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-042100 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (596.9246ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-042100 -n newest-cni-042100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (570.7309ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
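
Note: this pause/unpause failure looks like a downstream symptom of the same kubelet cgroup v1 validation failure seen in the no-preload logs above rather than an independent regression: with kubelet never staying up, both status probes can only ever print "Stopped", so the want = "Paused" and want = "Running" assertions cannot pass. The sequence the test drives is equivalent to the following, assuming the same profile name (an illustrative shell transcript, not the Go test code):

    minikube pause   -p newest-cni-042100 --alsologtostderr -v=1
    minikube status  -p newest-cni-042100 --format={{.APIServer}}   # want: Paused
    minikube unpause -p newest-cni-042100 --alsologtostderr -v=1
    minikube status  -p newest-cni-042100 --format={{.APIServer}}   # want: Running
    minikube status  -p newest-cni-042100 --format={{.Kubelet}}     # want: Running

minikube status exits non-zero (here 2) when a component is not running, which is why the harness flags each exit as "may be ok" and asserts on the printed field instead.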
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042100
helpers_test.go:243: (dbg) docker inspect newest-cni-042100:

-- stdout --
	[
	    {
	        "Id": "ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619",
	        "Created": "2025-12-05T07:52:58.091352749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T08:03:50.023797205Z",
	            "FinishedAt": "2025-12-05T08:03:46.631173784Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hosts",
	        "LogPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619-json.log",
	        "Name": "/newest-cni-042100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-042100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042100",
	                "Source": "/var/lib/docker/volumes/newest-cni-042100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042100",
	                "name.minikube.sigs.k8s.io": "newest-cni-042100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7425ef782ce126f539b7a23248f53aee42fe4667088eea6cf367858b569563e9",
	            "SandboxKey": "/var/run/docker/netns/7425ef782ce1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62708"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62709"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62710"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62711"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62712"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "174359b7b50b3bec7b4847d3ab43850e80d128f01a95736675cb3ceba87aab04",
	                    "EndpointID": "5e8b48011f9a64464c884645b921403d03309228e61384410733ff99b4453af2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042100",
	                        "ee0c9d80d83a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
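
Note: the docker inspect output above shows the node container itself is healthy from Docker's point of view (State.Status "running", State.Paused false, ports 22/2376/8443 published on 127.0.0.1), so the "Stopped" results come from the Kubernetes components inside the container, not from the container runtime. A quick way to separate the two layers, assuming a local Docker CLI (illustrative only):

    # Container-level view: prints e.g. "running false"
    docker inspect --format '{{.State.Status}} {{.State.Paused}}' newest-cni-042100

    # Component-level view: the fields the test asserts on
    minikube status -p newest-cni-042100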
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (609.2533ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25
E1205 08:10:17.903067    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25: (1.6922674s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-218000 sudo systemctl status docker --all --full --no-pager          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;   │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat docker --no-pager                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo crio config                                               │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/docker/daemon.json                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo docker system info                                       │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p bridge-218000                                                                │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat cri-docker --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cri-dockerd --version                                    │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status containerd --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat containerd --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /lib/systemd/system/containerd.service               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/containerd/config.toml                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo containerd config dump                                   │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status crio --all --full --no-pager            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │                     │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat crio --no-pager                            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo crio config                                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p kubenet-218000                                                               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ image   │ newest-cni-042100 image list --format=json                                      │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ pause   │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ unpause │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	W1205 08:03:44.511207    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:46.513793    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	Log file created at: 2025/12/05 08:03:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:03:48.079593    6576 out.go:360] Setting OutFile to fd 1628 ...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.133685    6576 out.go:374] Setting ErrFile to fd 1512...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.149881    6576 out.go:368] Setting JSON to false
	I1205 08:03:48.152825    6576 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13085,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:03:48.152825    6576 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:03:48.159945    6576 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:03:48.164658    6576 notify.go:221] Checking for updates...
	I1205 08:03:48.167308    6576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:48.170547    6576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:03:48.173264    6576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:03:48.177277    6576 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:03:48.179134    6576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:03:48.182963    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:48.184223    6576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:03:48.306826    6576 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:03:48.310816    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.562528    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.540004205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
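The dump above is the single-line output of `docker system info --format "{{json .}}"`, which minikube shells out for and decodes before validating a driver. A minimal sketch of that query-and-decode step, assuming only a working `docker` CLI on PATH; the struct below keeps just a few of the many fields visible in the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo keeps only a handful of the fields the daemon returns;
    // the full payload is what the log lines above show.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        // Same command the log records: one JSON document on stdout.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker system info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%s on %q: %d CPUs, %d bytes of memory\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }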
	I1205 08:03:48.565521    6576 out.go:179] * Using the docker driver based on existing profile
	I1205 08:03:48.568528    6576 start.go:309] selected driver: docker
	I1205 08:03:48.568528    6576 start.go:927] validating driver "docker" against &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.568528    6576 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:03:48.621627    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.870676    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.852383077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.870676    6576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 08:03:48.870676    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:03:48.871676    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:03:48.871676    6576 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.874674    6576 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 08:03:48.876674    6576 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:03:48.879674    6576 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:03:48.881674    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:03:48.881674    6576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 08:03:48.924123    6576 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:48.965045    6576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:03:48.965045    6576 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 08:03:49.173795    6576 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:49.174041    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 08:03:49.174210    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
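The `windows sanitize` lines above exist because image tags contain `:`, which is not a legal character in Windows file names, so the cache maps each ref to a `_`-separated path. A small sketch of that mapping plus the exists-then-skip check the cache performs a few seconds later in this log (path layout illustrative, not minikube's exact scheme):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachePathFor maps an image ref such as "registry.k8s.io/pause:3.10.1"
    // to an on-disk path with ':' replaced by '_', matching the
    // "windows sanitize" rewrites shown above.
    func cachePathFor(root, image string) string {
        return filepath.Join(root, filepath.FromSlash(strings.ReplaceAll(image, ":", "_")))
    }

    func main() {
        root := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64`
        p := cachePathFor(root, "registry.k8s.io/pause:3.10.1")
        if _, err := os.Stat(p); err == nil {
            fmt.Println(p, "exists - skip the download, as the cache lines below do")
        } else {
            fmt.Println(p, "missing - would download and save the tar file")
        }
    }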
	I1205 08:03:49.176070    6576 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:03:49.176070    6576 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:49.176610    6576 start.go:364] duration metric: took 539.2µs to acquireMachinesLock for "newest-cni-042100"
	I1205 08:03:49.176876    6576 start.go:96] Skipping create...Using existing machine configuration
	I1205 08:03:49.176954    6576 fix.go:54] fixHost starting: 
	I1205 08:03:49.185185    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:49.467905    6576 fix.go:112] recreateIfNeeded on newest-cni-042100: state=Stopped err=<nil>
	W1205 08:03:49.468085    6576 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 08:03:46.247259    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:48.745542    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:50.273234    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:48.514113    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:50.532984    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:53.014533    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:49.492567    6576 out.go:252] * Restarting existing docker container for "newest-cni-042100" ...
	I1205 08:03:49.497575    6576 cli_runner.go:164] Run: docker start newest-cni-042100
	I1205 08:03:50.779131    6576 cli_runner.go:217] Completed: docker start newest-cni-042100: (1.2815354s)
	I1205 08:03:50.788112    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:51.139299    6576 kic.go:430] container "newest-cni-042100" state is running.
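As the `recreateIfNeeded ... state=Stopped` line shows, `fixHost` decides between reusing, restarting, and recreating the machine purely from the container's state string, queried with `docker container inspect --format {{.State.Status}}`. A minimal sketch of that probe, with the container name taken from this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState runs the same inspect command the log shows and
    // returns the bare status string ("running", "exited", ...).
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("newest-cni-042100") // profile name from this log
        if err != nil {
            fmt.Println("inspect failed (container missing?):", err)
            return
        }
        fmt.Println("state:", state) // a stopped container triggers the restart path above
    }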
	I1205 08:03:51.164376    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:51.273747    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:51.276892    6576 machine.go:94] provisionDockerMachine start ...
	I1205 08:03:51.284394    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:51.396042    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:51.397040    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:51.397040    6576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:03:51.400042    6576 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
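The handshake EOF above appears to be benign: the container was started barely a second earlier and its sshd is not yet accepting connections, and the log shows the `hostname` command eventually succeeding a few seconds later, which implies a dial-and-retry loop. A sketch of that dial-until-ready shape, using a plain TCP dial as a stand-in for the SSH handshake (port 62708 comes from this log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort retries a TCP dial until the port accepts a connection
    // or the deadline passes.
    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not reachable within %s", addr, timeout)
    }

    func main() {
        // 127.0.0.1:62708 is the forwarded SSH port shown in this log.
        if err := waitForPort("127.0.0.1:62708", 30*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("port is accepting connections")
    }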
	I1205 08:03:52.385305    6576 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.385658    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 08:03:52.385720    6576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.211458s
	I1205 08:03:52.385800    6576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 08:03:52.435659    6576 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.435659    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 08:03:52.435659    6576 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2613971s
	I1205 08:03:52.435659    6576 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 08:03:52.467883    6576 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.468216    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 08:03:52.468216    6576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.2939732s
	I1205 08:03:52.468216    6576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 08:03:52.472465    6576 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.472465    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 08:03:52.472465    6576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2982024s
	I1205 08:03:52.472465    6576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 08:03:52.472991    6576 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.473088    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 08:03:52.473088    6576 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2988253s
	I1205 08:03:52.473088    6576 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 08:03:52.478918    6576 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.479537    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 08:03:52.479537    6576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3052743s
	I1205 08:03:52.479537    6576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 08:03:52.488107    6576 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.489284    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 08:03:52.489284    6576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.3150206s
	I1205 08:03:52.489284    6576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 08:03:52.587256    6576 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.588098    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 08:03:52.588098    6576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.413907s
	I1205 08:03:52.588098    6576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 08:03:52.588098    6576 cache.go:87] Successfully saved all images to host disk.
	W1205 08:03:50.818460    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	I1205 08:03:53.244351    4412 pod_ready.go:94] pod "coredns-66bc5c9577-zrgxp" is "Ready"
	I1205 08:03:53.244351    4412 pod_ready.go:86] duration metric: took 21.0105368s for pod "coredns-66bc5c9577-zrgxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.250834    4412 pod_ready.go:83] waiting for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.262503    4412 pod_ready.go:94] pod "etcd-bridge-218000" is "Ready"
	I1205 08:03:53.262503    4412 pod_ready.go:86] duration metric: took 11.6685ms for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.271087    4412 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.281426    4412 pod_ready.go:94] pod "kube-apiserver-bridge-218000" is "Ready"
	I1205 08:03:53.281426    4412 pod_ready.go:86] duration metric: took 10.3388ms for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.286385    4412 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.438718    4412 pod_ready.go:94] pod "kube-controller-manager-bridge-218000" is "Ready"
	I1205 08:03:53.438718    4412 pod_ready.go:86] duration metric: took 152.3311ms for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.641268    4412 pod_ready.go:83] waiting for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.039664    4412 pod_ready.go:94] pod "kube-proxy-8r4gs" is "Ready"
	I1205 08:03:54.039664    4412 pod_ready.go:86] duration metric: took 398.3895ms for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.241161    4412 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:94] pod "kube-scheduler-bridge-218000" is "Ready"
	I1205 08:03:54.641085    4412 pod_ready.go:86] duration metric: took 399.9175ms for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:40] duration metric: took 32.4419039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:54.749081    4412 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:03:54.754768    4412 out.go:179] * Done! kubectl is now configured to use "bridge-218000" cluster and "default" namespace by default
	W1205 08:03:55.516894    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:58.012284    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:54.578463    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.578463    6576 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 08:03:54.583153    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.645702    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.646148    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.646193    6576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 08:03:54.866524    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.872867    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.933417    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.934199    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.934272    6576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:03:55.129977    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
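The remote script above is idempotent: it appends a `127.0.1.1` entry only when no `/etc/hosts` line already ends in the hostname, and rewrites an existing `127.0.1.1` line otherwise. A read-only sketch of the same presence check in Go, assuming a Unix-style hosts file:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // hostsHasEntry mirrors the script's `grep -xq '.*\s<name>'` test: any
    // whole line whose last whitespace-separated field is the hostname
    // counts as already present.
    func hostsHasEntry(contents, hostname string) bool {
        re := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
        return re.MatchString(contents)
    }

    func main() {
        b, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        name := "newest-cni-042100" // hostname from this log
        if hostsHasEntry(string(b), name) {
            fmt.Println("entry present, nothing to do")
        } else {
            fmt.Printf("would append: 127.0.1.1 %s\n", name)
        }
    }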
	I1205 08:03:55.129977    6576 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:03:55.129977    6576 ubuntu.go:190] setting up certificates
	I1205 08:03:55.129977    6576 provision.go:84] configureAuth start
	I1205 08:03:55.133735    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:55.190185    6576 provision.go:143] copyHostCerts
	I1205 08:03:55.190185    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:03:55.190185    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:03:55.190984    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:03:55.191986    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:03:55.191986    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:03:55.192251    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:03:55.193178    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:03:55.193178    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:03:55.193462    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:03:55.194234    6576 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
	I1205 08:03:55.277216    6576 provision.go:177] copyRemoteCerts
	I1205 08:03:55.282373    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:03:55.285821    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.350220    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:55.476652    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:03:55.511250    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:03:55.546706    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 08:03:55.583614    6576 provision.go:87] duration metric: took 453.6304ms to configureAuth
	I1205 08:03:55.583614    6576 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:03:55.585275    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:55.589206    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.651189    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.652212    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.652246    6576 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:03:55.836329    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:03:55.837449    6576 ubuntu.go:71] root file system type: overlay
	I1205 08:03:55.837646    6576 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:03:55.841558    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.910453    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.911069    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.911069    6576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:03:56.123635    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:03:56.128031    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.191540    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:56.191765    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:56.191765    6576 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:03:56.396364    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:03:56.396364    6576 machine.go:97] duration metric: took 5.1193899s to provisionDockerMachine
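The `diff -u ... || { mv ...; systemctl ...; }` one-liner above is a compare-then-swap: the freshly rendered unit is written to `docker.service.new`, and only when it differs from the installed unit does minikube move it into place, `daemon-reload`, and restart Docker, so an unchanged config costs no restart. A local sketch of the same pattern (root-only paths; illustrative, not minikube's actual helper):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit replaces the unit file and restarts the service only when
    // the rendered contents differ from what is installed.
    func updateUnit(path, service string, desired []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, desired) {
            return nil // unchanged: no reload or restart needed
        }
        if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"restart", service}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        // Illustrative contents; the real unit is the one echoed above.
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := updateUnit("/lib/systemd/system/docker.service", "docker", unit); err != nil {
            fmt.Println(err) // needs root; as a normal user this sketch will fail here
        }
    }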
	I1205 08:03:56.396364    6576 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 08:03:56.396897    6576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:03:56.402233    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:03:56.406223    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.460168    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.609105    6576 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:03:56.617925    6576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:03:56.617925    6576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:03:56.618732    6576 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:03:56.623542    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:03:56.637899    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:03:56.671787    6576 start.go:296] duration metric: took 274.8468ms for postStartSetup
	I1205 08:03:56.675921    6576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:03:56.678948    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.735289    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.884826    6576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:03:56.893835    6576 fix.go:56] duration metric: took 7.7168367s for fixHost
	I1205 08:03:56.893835    6576 start.go:83] releasing machines lock for "newest-cni-042100", held for 7.7169474s
	I1205 08:03:56.896826    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:56.959384    6576 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:03:56.965413    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.966255    6576 ssh_runner.go:195] Run: cat /version.json
	I1205 08:03:56.973872    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:57.022198    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:57.026201    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	W1205 08:03:57.148711    6576 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
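This `command not found` failure looks like the source of the registry-connectivity warning emitted a few lines below: the probe sends the Windows binary name `curl.exe` verbatim over SSH into the Linux guest, where the binary is plain `curl`, so the check fails regardless of actual network reachability. A sketch of a target-OS-aware choice of binary name (hypothetical helper, not minikube's API):

    package main

    import "fmt"

    // curlBinary picks the binary name by the OS the command will run on,
    // not the OS of the caller - the distinction the failed probe misses.
    func curlBinary(targetOS string) string {
        if targetOS == "windows" {
            return "curl.exe"
        }
        return "curl"
    }

    func main() {
        fmt.Println(curlBinary("linux"))   // curl  (what the guest actually has)
        fmt.Println(curlBinary("windows")) // curl.exe
    }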
	I1205 08:03:57.162212    6576 ssh_runner.go:195] Run: systemctl --version
	I1205 08:03:57.181097    6576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:03:57.193288    6576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:03:57.197753    6576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:03:57.214357    6576 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 08:03:57.214357    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.214357    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.214357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:57.242461    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:03:57.262818    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 08:03:57.264705    6576 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:03:57.264749    6576 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:03:57.282712    6576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:03:57.286891    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:57.310466    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.333091    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:57.356105    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.377603    6576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:57.401090    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:57.423330    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:57.445407    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:57.472206    6576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:57.488210    6576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:03:57.505210    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:57.657790    6576 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 08:03:57.802417    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.802417    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.807146    6576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:57.832467    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.857712    6576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:57.930272    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.960276    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:57.984286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:58.017277    6576 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:58.032288    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:58.048281    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:03:58.077282    6576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:58.275290    6576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:58.457293    6576 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:58.457293    6576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:03:58.486286    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:58.509287    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:58.648318    6576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:04:00.173930    6576 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5255881s)
	I1205 08:04:00.177929    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:04:00.201541    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:04:00.228851    6576 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 08:04:00.259044    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:00.283032    6576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:04:00.429299    6576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:04:00.593446    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.738544    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:04:00.766865    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:04:00.791407    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.930315    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:04:01.041317    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:01.059628    6576 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:04:01.064630    6576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:04:01.072635    6576 start.go:564] Will wait 60s for crictl version
	I1205 08:04:01.076636    6576 ssh_runner.go:195] Run: which crictl
	I1205 08:04:01.090615    6576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:04:01.132099    6576 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:04:01.136068    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.182106    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.227459    6576 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 08:04:01.231071    6576 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 08:04:01.375969    6576 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:04:01.379962    6576 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:04:01.387350    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.408320    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:01.468320    6576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 08:04:00.335905    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:00.512126    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:04:03.018493    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:04:01.471323    6576 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:04:01.471323    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:04:01.475324    6576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:04:01.511342    6576 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:04:01.512362    6576 cache_images.go:86] Images are preloaded, skipping loading
	I1205 08:04:01.512362    6576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 08:04:01.512362    6576 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
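The kubelet drop-in printed above uses standard systemd override semantics: the bare "ExecStart=" clears the ExecStart inherited from the stock kubelet.service before the next line sets minikube's command; without the empty assignment systemd would reject a second ExecStart for a regular service. After the file is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below) and daemon-reload runs, the merged unit can be checked with:

    systemctl cat kubelet    # prints the base unit followed by each drop-in fragment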
	I1205 08:04:01.515327    6576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:04:01.600646    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:04:01.600646    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:04:01.600646    6576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 08:04:01.600646    6576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:04:01.600646    6576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:04:01.604645    6576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
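Note that the KubeletConfiguration above deliberately disables disk-pressure housekeeping (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at "0%", per the embedded comment), so image GC and evictions cannot interfere with test pods mid-run. Once the node is up, the kubelet's effective merged configuration can be read back through the API server; the node name below is the one from this log, and a kubeconfig pointing at the cluster is assumed:

    kubectl get --raw "/api/v1/nodes/newest-cni-042100/proxy/configz"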
	I1205 08:04:01.617663    6576 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:04:01.621646    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:04:01.634708    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 08:04:01.659457    6576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 08:04:01.681516    6576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1205 08:04:01.709549    6576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:04:01.717165    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.737936    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:01.886462    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:01.908845    6576 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 08:04:01.908845    6576 certs.go:195] generating shared ca certs ...
	I1205 08:04:01.908845    6576 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:01.910250    6576 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:04:01.910428    6576 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:04:01.910428    6576 certs.go:257] generating profile certs ...
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 08:04:01.911645    6576 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 08:04:01.912393    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:04:01.912708    6576 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:04:01.912818    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:04:01.913766    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:04:01.914884    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:04:01.946745    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:04:01.978670    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:04:02.020771    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:04:02.052789    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 08:04:02.083785    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:04:02.111686    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:04:02.138106    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:04:02.167957    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:04:02.197699    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:04:02.228974    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:04:02.258542    6576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:04:02.283541    6576 ssh_runner.go:195] Run: openssl version
	I1205 08:04:02.296537    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.312534    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:04:02.327543    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.334539    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.339544    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.392223    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:04:02.408977    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.424981    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:04:02.439981    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.446982    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.451985    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.500175    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:04:02.518368    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.537597    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:04:02.555653    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.562656    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.566659    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.617005    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
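The openssl x509 -hash calls above print the subject-name hash that OpenSSL uses to locate trusted certificates in a c_rehash-style directory, which is why each cert is followed by ln -fs to /etc/ssl/certs/<hash>.0 and a test -L to confirm the link. For the 8036.pem cert, for example, the hash resolved to 51391683:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem    # prints 51391683
    sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/51391683.0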
	I1205 08:04:02.635329    6576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:04:02.649383    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 08:04:02.697863    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 08:04:02.747535    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 08:04:02.802236    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 08:04:02.853222    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 08:04:02.901642    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
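Each -checkend 86400 call exits non-zero if the certificate is already expired or will expire within 86400 seconds (24 hours); that exit status is what lets minikube reuse the existing control-plane certs here instead of regenerating them. The same check in isolation:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "needs renewal"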
	I1205 08:04:02.946962    6576 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:04:02.951256    6576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:04:02.986478    6576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:04:02.999955    6576 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 08:04:02.999955    6576 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 08:04:03.003999    6576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 08:04:03.019291    6576 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 08:04:03.022819    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.083372    6576 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.084185    6576 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042100" cluster setting kubeconfig missing "newest-cni-042100" context setting]
	I1205 08:04:03.084741    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.109144    6576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 08:04:03.128232    6576 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 08:04:03.138905    6576 kubeadm.go:602] duration metric: took 138.9481ms to restartPrimaryControlPlane
	I1205 08:04:03.138905    6576 kubeadm.go:403] duration metric: took 191.9404ms to StartCluster
	I1205 08:04:03.138905    6576 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.138905    6576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.141698    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.142419    6576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:04:03.142419    6576 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:04:03.142419    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:04:03.163290    6576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting dashboard=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon dashboard=true in "newest-cni-042100"
	W1205 08:04:03.163290    6576 addons.go:248] addon dashboard should already be in state true
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.192363    6576 out.go:179] * Verifying Kubernetes components...
	I1205 08:04:03.249622    6576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-042100"
	I1205 08:04:03.250609    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.257607    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.258609    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:03.261608    6576 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 08:04:03.264610    6576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:04:03.309607    6576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.309607    6576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:04:03.312609    6576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.312609    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:04:03.312609    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.315610    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.318607    6576 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 08:04:03.510751    7752 pod_ready.go:94] pod "coredns-66bc5c9577-gsfxl" is "Ready"
	I1205 08:04:03.510751    7752 pod_ready.go:86] duration metric: took 25.5102081s for pod "coredns-66bc5c9577-gsfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.517746    7752 pod_ready.go:83] waiting for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.529764    7752 pod_ready.go:94] pod "etcd-kubenet-218000" is "Ready"
	I1205 08:04:03.529764    7752 pod_ready.go:86] duration metric: took 12.0185ms for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.535749    7752 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.544756    7752 pod_ready.go:94] pod "kube-apiserver-kubenet-218000" is "Ready"
	I1205 08:04:03.544756    7752 pod_ready.go:86] duration metric: took 9.007ms for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.549745    7752 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.706418    7752 pod_ready.go:94] pod "kube-controller-manager-kubenet-218000" is "Ready"
	I1205 08:04:03.706418    7752 pod_ready.go:86] duration metric: took 156.6708ms for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.906896    7752 pod_ready.go:83] waiting for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.305526    7752 pod_ready.go:94] pod "kube-proxy-l9mnz" is "Ready"
	I1205 08:04:04.305526    7752 pod_ready.go:86] duration metric: took 398.0934ms for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.506453    7752 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:94] pod "kube-scheduler-kubenet-218000" is "Ready"
	I1205 08:04:04.908413    7752 pod_ready.go:86] duration metric: took 401.8894ms for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:40] duration metric: took 37.4190345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:04:05.004707    7752 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:04:05.007705    7752 out.go:179] * Done! kubectl is now configured to use "kubenet-218000" cluster and "default" namespace by default
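The "minor skew: 0" note above is minikube's client/server version check: it compares the host kubectl minor version against the cluster's Kubernetes version and warns when they differ by more than one minor release, the skew kubectl itself supports. Both are 1.34 here, so no warning is emitted. The same comparison by hand:

    kubectl version    # compare the Client Version and Server Version minor numbers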
	I1205 08:04:03.344609    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 08:04:03.344609    6576 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 08:04:03.353008    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.373762    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.389748    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.415749    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.454747    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:03.481745    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.544756    6576 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:04:03.550761    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:03.552751    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.556766    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 08:04:03.556766    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 08:04:03.561743    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.627813    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 08:04:03.627923    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 08:04:03.654463    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 08:04:03.654463    6576 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 08:04:03.731575    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 08:04:03.731654    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1205 08:04:03.751356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.751356    6576 retry.go:31] will retry after 148.467646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.754346    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	W1205 08:04:03.755354    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.755354    6576 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 08:04:03.755354    6576 retry.go:31] will retry after 202.130528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
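The failures above, and every retry that follows, are one and the same condition: kubectl apply tries to download the OpenAPI schema from the apiserver for client-side validation, but the restarted apiserver is not yet listening on localhost:8443, so the request dies with "connection refused" and minikube retries with increasing backoff until the endpoint comes up. An equivalent manual wait, using the in-container kubeconfig and kubectl path from the log:

    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz >/dev/null 2>&1; do
      sleep 1
    done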
	I1205 08:04:03.774491    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 08:04:03.774491    6576 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 08:04:03.793803    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 08:04:03.793803    6576 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 08:04:03.828295    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 08:04:03.828351    6576 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 08:04:03.851355    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.851355    6576 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 08:04:03.876402    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.905217    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:03.957742    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.957742    6576 retry.go:31] will retry after 291.655688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.962256    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:03.992521    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.992521    6576 retry.go:31] will retry after 561.792628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.049441    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:04.057481    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.057556    6576 retry.go:31] will retry after 288.112081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.254701    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.343216    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.343216    6576 retry.go:31] will retry after 359.979776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.350062    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:04.431174    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.431174    6576 retry.go:31] will retry after 483.679942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.549772    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:04.559147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:04.642871    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.642871    6576 retry.go:31] will retry after 528.970083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.708123    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.787283    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.787283    6576 retry.go:31] will retry after 459.684582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:04.919229    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.004707    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.004707    6576 retry.go:31] will retry after 831.823948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 08:04:05.050298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.177969    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:05.252148    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:05.268807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.268914    6576 retry.go:31] will retry after 1.219301827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 08:04:05.381615    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.381684    6576 retry.go:31] will retry after 1.003502336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:05.548840    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.841493    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.945714    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.945714    6576 retry.go:31] will retry after 1.344373684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 08:04:06.051495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:06.390219    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:06.476859    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.476859    6576 retry.go:31] will retry after 916.677354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:06.493513    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:06.550586    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:06.586142    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.586142    6576 retry.go:31] will retry after 814.667109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:07.049968    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:07.295279    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:07.385161    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.385225    6576 retry.go:31] will retry after 2.309719888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 08:04:07.397737    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:07.404241    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.24760459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.229405263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:07.550637    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.050329    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.551330    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.052416    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.549628    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.699045    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:09.722067    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:09.740066    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:09.854063    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.854063    6576 retry.go:31] will retry after 1.718952919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1205 08:04:09.926061    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.926061    6576 retry.go:31] will retry after 2.401961347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 08:04:09.960056    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.961057    6576 retry.go:31] will retry after 3.751594778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:10.049061    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:10.375252    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:04:10.549298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.049797    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.550139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.577133    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:11.663155    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:11.663155    6576 retry.go:31] will retry after 4.120114825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
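The cycle above — run kubectl apply, fail because nothing accepts connections on localhost:8443, sleep a jittered interval, probe for kube-apiserver with pgrep, and try again — is a plain retry-with-backoff loop. Below is a minimal, self-contained sketch of that pattern in Go; it is illustrative only, not minikube's actual implementation, and the names probeAPIServer and applyAddon are hypothetical stand-ins for the real probe and apply steps.

// Illustrative sketch of the retry-with-backoff pattern visible in the
// log above; not minikube's code. probeAPIServer and applyAddon are
// hypothetical stand-ins for the real probe and apply steps.
package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// probeAPIServer reports whether something accepts TCP connections on
// the apiserver address; while it returns false, every apply attempt
// fails the way the log shows ("connect: connection refused").
func probeAPIServer(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

// applyAddon stands in for `kubectl apply --force -f <addon>.yaml`,
// which fails while the apiserver's OpenAPI endpoint is unreachable.
func applyAddon(addr string) error {
	if !probeAPIServer(addr) {
		return fmt.Errorf("dial tcp %s: connect: connection refused", addr)
	}
	return nil
}

func main() {
	const addr = "localhost:8443"
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		err := applyAddon(addr)
		if err == nil {
			fmt.Println("addon applied")
			return
		}
		// Jittered, growing delays, loosely echoing the irregular
		// "will retry after ..." intervals logged above.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		backoff = backoff * 3 / 2
	}
	fmt.Println("giving up")
}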
	I1205 08:04:12.049572    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:12.333014    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:12.419653    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.419653    6576 retry.go:31] will retry after 2.740389125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:12.549673    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.050128    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.549901    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.717839    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:13.806807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:13.806807    6576 retry.go:31] will retry after 4.752661147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
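Each "will retry after …" line above comes from minikube's retry helper (retry.go:31); as this log proceeds the waits grow with jitter from roughly 2.7s up past 20s. A minimal Go sketch of that apply-and-backoff shape, with assumed function names and an assumed growth factor, and the sudo/KUBECONFIG plumbing omitted (this is an illustration, not minikube's actual retry code):

    // applyWithRetry re-runs a kubectl apply until it succeeds, sleeping a
    // growing, jittered interval between attempts, as the retry.go:31 lines
    // above do. Names, attempt count, and growth factor are assumptions.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        wait := 2 * time.Second
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            // Mirror the log: report the failure, then sleep before retrying.
            fmt.Printf("apply failed, will retry after %s: %v\n%s", wait, err, out)
            time.Sleep(wait)
            // Grow the wait and add jitter so concurrent appliers do not sync up.
            wait = time.Duration(float64(wait) * (1.2 + 0.8*rand.Float64()))
        }
        return fmt.Errorf("apply %s: gave up after %d attempts", manifest, attempts)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }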
	I1205 08:04:14.050521    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:14.551720    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.050682    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.165926    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:15.256271    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.256271    6576 retry.go:31] will retry after 4.534312748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:15.549805    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.787818    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:15.865098    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.865628    6576 retry.go:31] will retry after 5.383695211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 08:04:16.050434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:16.549442    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.049923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.550083    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.049667    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:19.104488    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 08:04:19.104793    4560 node_ready.go:38] duration metric: took 6m0.001013s for node "no-preload-104100" to be "Ready" ...
	I1205 08:04:19.107356    4560 out.go:203] 
	W1205 08:04:19.110511    4560 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 08:04:19.110554    4560 out.go:285] * 
	W1205 08:04:19.112383    4560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:04:19.116573    4560 out.go:203] 
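The 4560-prefixed lines just above belong to a second minikube process (the no-preload-104100 cluster) whose buffered output is interleaved into this log, which is why its timestamps run slightly ahead of the surrounding 6576 lines. Its failure is the terminal form of the pattern seen throughout: the node never reported the Ready condition inside the 6m budget, so the start aborts with GUEST_START. A minimal client-go sketch of such a wait loop, assuming a hypothetical 500ms poll interval (this is not minikube's node_ready.go):

    // waitNodeReady polls a node until its Ready condition is True or the
    // timeout expires; the log above is the timeout branch firing.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitNodeReady(cs, "no-preload-104100", 6*time.Minute))
    }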
	I1205 08:04:18.551343    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.565349    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:18.647263    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:18.647263    6576 retry.go:31] will retry after 8.382323881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:19.050424    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.796280    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:19.904265    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.904265    6576 retry.go:31] will retry after 5.117792571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:20.052293    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:20.550380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.052677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.255736    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:21.356356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.356356    6576 retry.go:31] will retry after 8.875197166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
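Note that the --validate=false escape hatch kubectl suggests in every stderr block above would not rescue these applies: it only skips the client-side OpenAPI schema download, while the apply request itself still needs a reachable API server on localhost:8443. For completeness, the suggested form would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --validate=false --force -f /etc/kubernetes/addons/storage-provisioner.yaml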
	I1205 08:04:21.550333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.049310    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.550338    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.050244    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.551039    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.050874    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.550399    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:25.027043    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:25.050989    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:25.159593    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.159593    6576 retry.go:31] will retry after 7.802785807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:25.553440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.050359    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.551986    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:27.034606    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:27.050924    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:27.141503    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.141551    6576 retry.go:31] will retry after 13.674183061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:27.553694    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.049210    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.550842    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.051091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.549571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.051474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.237147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:30.345143    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.345143    6576 retry.go:31] will retry after 18.684554823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
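Every failure above bottoms out in the same dial error ("dial tcp [::1]:8443: connect: connection refused"), i.e. nothing is listening on the apiserver port yet. A self-contained Go probe reproducing just that check (an illustration, not part of the test suite):

    // probe the apiserver port the kubectl errors above are dialing;
    // "connection refused" means no process is listening there yet.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }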
	I1205 08:04:30.552505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.050974    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.550315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.053025    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.550841    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.967139    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:33.050008    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:33.074001    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.074001    6576 retry.go:31] will retry after 21.457353412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:04:33.550375    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.053598    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.050034    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.050947    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.552933    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.049827    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.551205    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.050234    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.552156    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.050748    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.549737    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.050549    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.550949    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.819283    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:40.946292    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:40.946292    6576 retry.go:31] will retry after 18.180546633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:04:41.051295    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:41.551923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.051010    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.550802    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.050090    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.549595    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.050323    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.551060    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.050284    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.549318    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.049045    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.550390    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.050869    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.549920    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.050040    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:49.037573    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:49.050392    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:49.132808    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.132808    6576 retry.go:31] will retry after 12.282235903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
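Interleaved with all of this, the ssh_runner lines re-run "sudo pgrep -xnf kube-apiserver.*minikube.*" about twice a second, waiting for an apiserver process to appear. A sketch of that poll under assumed names and a fixed interval (pgrep exits 0 on a match, non-zero on none):

    // pollApiserver re-runs pgrep until it finds a kube-apiserver process
    // or the deadline passes. Helper name and interval are assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func pollApiserver(deadline time.Duration) bool {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true // exit status 0: a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        fmt.Println("apiserver process found:", pollApiserver(30*time.Second))
    }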
	I1205 08:04:49.549952    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.052465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.550412    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.053026    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.551123    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.050959    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.550243    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.051085    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.550766    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.053585    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.537931    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:54.551106    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:54.662326    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:54.662326    6576 retry.go:31] will retry after 25.982171867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:55.050927    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:55.551197    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.049847    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.551717    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.050571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.552306    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.050495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.550960    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.050091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.133373    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:59.223117    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.223117    6576 retry.go:31] will retry after 23.551015037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.551231    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.047738    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.550465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.051875    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.420389    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:01.505728    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.505728    6576 retry.go:31] will retry after 17.206812229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.551821    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.051028    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.550994    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.051369    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.550326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:03.585938    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.585938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:03.590134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:03.617879    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.617879    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:03.624332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:03.651940    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.651940    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:03.656120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:03.685733    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.685733    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:03.690030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:03.719658    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.719713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:03.723576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:03.755797    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.755797    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:03.760966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:03.789461    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.789461    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:03.793178    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:03.823147    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.823147    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:03.823147    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:03.823679    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:03.890829    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:03.890829    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:03.937573    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:03.937573    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:04.028268    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
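The memcache.go errors come from kubectl's discovery client: before it can describe nodes it must list the server's API groups, and each of the five attempts is refused. The same call can be reproduced with client-go; a sketch assuming the kubeconfig path from the log (requires the k8s.io/client-go module):

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(config)
        if err != nil {
            panic(err)
        }
        // This is the request that fails above with "connection refused".
        groups, err := dc.ServerGroups()
        if err != nil {
            fmt.Println("couldn't get current server API group list:", err)
            return
        }
        fmt.Println("API groups:", len(groups.Groups))
    }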
	I1205 08:05:04.028268    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:04.028268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:04.054265    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:04.054265    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
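Each time the apiserver check fails, minikube runs the diagnostics cycle shown above: it looks for every expected control-plane container by the k8s_ name prefix Docker gives kubelet-managed containers, then gathers kubelet, dmesg, describe-nodes, Docker, and container-status logs. A sketch of the container-enumeration step, assuming a local Docker daemon (in minikube these commands run over SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // The component list mirrors the names probed in the log.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println("docker ps failed:", err)
                return
            }
            if ids := strings.Fields(string(out)); len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
            } else {
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }

Here every probe returns zero containers: kubelet has not started any control-plane pod, consistent with the apiserver never coming up.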
	I1205 08:05:06.624597    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:06.650113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:06.681568    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.682088    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:06.685527    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:06.715181    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.715181    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:06.718768    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:06.748649    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.748692    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:06.752313    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:06.783519    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.783582    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:06.787257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:06.817858    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.817858    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:06.821703    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:06.854241    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.854241    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:06.857773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:06.888901    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.888901    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:06.894071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:06.923675    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.923675    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:06.923675    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:06.923675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:06.974113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:06.974166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:07.037689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:07.037689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:07.080588    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:07.080588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:07.171034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:07.171067    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:07.171067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:09.706054    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:09.732108    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:09.767273    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.767300    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:09.770837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:09.802479    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.802550    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:09.806320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:09.835537    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.835537    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:09.841566    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:09.874578    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.874578    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:09.878148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:09.906942    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.907017    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:09.910154    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:09.941197    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.941197    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:09.945133    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:09.974591    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.974591    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:09.978698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:10.007749    6576 logs.go:282] 0 containers: []
	W1205 08:05:10.007749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:10.007749    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:10.007749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:10.044236    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:10.044236    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:10.130995    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:10.130995    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:10.130995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:10.158359    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:10.158945    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:10.209053    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:10.209053    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:12.782787    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:12.809043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:12.839958    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.839958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:12.845180    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:12.876657    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.876720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:12.880739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:12.908227    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.908227    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:12.912011    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:12.942400    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.942449    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:12.945431    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:12.973155    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.973155    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:12.976739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:13.004259    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.004259    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:13.008151    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:13.038225    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.038225    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:13.041692    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:13.070500    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.070500    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:13.070500    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:13.070500    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:13.134608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:13.134608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:13.173994    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:13.173994    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:13.270602    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:13.270665    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:13.270665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:13.299297    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:13.299297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:15.870600    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:15.895506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:15.927013    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.927013    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:15.930717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:15.959875    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.959941    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:15.963955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:15.992862    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.992862    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:15.996303    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:16.023966    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.023966    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:16.027786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:16.058698    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.058698    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:16.065246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:16.094826    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.094826    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:16.098650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:16.144774    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.144820    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:16.148422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:16.177296    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.177296    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:16.177296    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:16.177296    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:16.242225    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:16.242225    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:16.283778    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:16.283778    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:16.378623    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:16.378623    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:16.378623    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:16.408736    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:16.409256    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:18.719251    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:18.815541    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:18.815541    6576 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
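At this point the storage-provisioner retries are exhausted and minikube downgrades the failure to the user-facing warning above instead of aborting the start. Note that the --validate=false hint in the kubectl error would not help: validation is merely the first request to hit the dead endpoint. A quick probe of the address every error points at (the two-second timeout is an illustrative choice):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the "connection refused" in the errors above.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }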
	I1205 08:05:18.959261    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:18.983847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:19.016048    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.016048    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:19.022913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:19.054693    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.054752    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:19.058555    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:19.087342    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.087342    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:19.090772    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:19.118199    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.118199    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:19.121567    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:19.151346    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.151346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:19.155305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:19.186521    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.186611    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:19.190219    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:19.220730    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.220730    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:19.225064    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:19.255890    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.256013    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:19.256013    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:19.256013    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:19.324476    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:19.324476    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:19.362802    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:19.362802    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:19.443537    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:19.444546    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:19.444546    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:19.474585    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:19.474647    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:20.651307    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:20.735190    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:20.735294    6576 retry.go:31] will retry after 27.405422909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.034778    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:22.060808    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:22.093037    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.093111    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:22.097193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:22.124988    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.125036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:22.128496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:22.157896    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.157947    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:22.161826    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:22.190808    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.190839    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:22.194900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:22.227226    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.227346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:22.230966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:22.260811    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.260861    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:22.264784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:22.295222    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.295331    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:22.302135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:22.343045    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.343116    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:22.343116    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:22.343116    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:22.394026    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:22.394026    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:22.457078    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:22.457078    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:22.498385    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:22.498434    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:22.581112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:22.581112    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:22.581112    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
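Each logs.go:282 probe in the cycle above is a docker ps query filtered on a k8s_ container-name prefix, and the repeated "0 containers" answers confirm that no control-plane container was ever created. A rough Go equivalent of one probe, using os/exec (the name prefixes are copied from the log; the helper itself is an illustrative sketch):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists IDs of containers whose name matches the given
    // prefix, mirroring the docker ps probes in the log above.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name="+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
    		ids, err := containerIDs(name)
    		if err != nil {
    			fmt.Println(name, "probe failed:", err)
    			continue
    		}
    		// Zero IDs here corresponds to the "0 containers: []" lines above.
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }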
	I1205 08:05:22.780060    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:05:22.859804    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.859804    6576 retry.go:31] will retry after 21.036491608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
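kubectl's hint to pass --validate=false is a red herring in this situation: client-side validation fails only because downloading the OpenAPI schema needs the same apiserver that is refusing connections, so disabling validation would simply move the failure to the apply request itself. A quick reachability probe, sketched below as a generic Go check rather than anything minikube does, separates "apiserver is down" from a genuine validation problem:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The log shows dial tcp [::1]:8443: connect: connection refused,
    	// i.e. nothing is listening where the kubeconfig points.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // no apply can succeed yet
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8443 is open; a validation error would mean something else")
    }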
	I1205 08:05:25.113006    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:25.148820    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:25.186604    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.186604    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:25.191401    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:25.223786    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.223867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:25.227359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:25.262253    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.262310    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:25.266030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:25.298397    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.298433    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:25.303771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:25.334112    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.334112    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:25.338565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:25.370125    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.370206    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:25.374513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:25.406130    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.406219    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:25.410417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:25.442663    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.442742    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:25.442742    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:25.442742    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:25.479786    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:25.479786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:25.573308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:25.573308    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:25.573308    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:25.599667    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:25.599667    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:25.650617    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:25.650617    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
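The container-status one-liner above (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a) prefers crictl when it is on PATH and otherwise falls back to plain docker ps. The same try-then-fall-back shape in Go, as an illustrative sketch (the sudo/CLI invocations are assumptions about the node environment, not minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl when present, mirroring the shell
    // fallback in the log, and otherwise lists containers via docker.
    func containerStatus() (string, error) {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    			return string(out), nil
    		}
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("both runtimes failed:", err)
    		return
    	}
    	fmt.Print(out)
    }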
	I1205 08:05:28.218354    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:28.243705    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:28.279022    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.279022    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:28.283525    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:28.313798    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.313798    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:28.318172    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:28.347700    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.347700    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:28.351701    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:28.381257    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.381341    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:28.384917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:28.416041    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.416041    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:28.419541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:28.447349    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.447349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:28.451684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:28.479275    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.479307    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:28.483095    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:28.511115    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.511187    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:28.511187    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:28.511237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.574706    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:28.574706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:28.615541    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:28.615541    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:28.709604    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
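The memcache.go:265 errors above come from kubectl's cached discovery client: before it can serve describe nodes it must fetch the server's API group list, and with no apiserver listening every discovery request dies at the TCP layer. A minimal client-go sketch of that same discovery call; the kubeconfig path is the one from the log, everything else is an illustrative assumption:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// ServerGroups is the request behind "couldn't get current server
    	// API group list"; it keeps failing with connection refused until
    	// kube-apiserver actually listens on 8443.
    	groups, err := dc.ServerGroups()
    	if err != nil {
    		fmt.Println("discovery failed:", err)
    		return
    	}
    	fmt.Println("API groups:", len(groups.Groups))
    }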
	I1205 08:05:28.709604    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:28.709604    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:28.738815    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:28.738815    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.300476    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:31.328202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:31.357921    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.357958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:31.361905    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:31.390844    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.390926    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:31.395488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:31.426488    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.426570    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:31.430048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:31.461632    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.461687    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:31.465105    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:31.492594    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.492657    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:31.496042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:31.523806    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.523834    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:31.527758    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:31.557959    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.558020    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:31.561776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:31.588451    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.588485    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:31.588513    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:31.588535    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:31.675984    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:31.675984    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:31.675984    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:31.706483    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:31.706567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.753154    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:31.753677    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:31.813379    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:31.813379    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.359731    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:34.386737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:34.416273    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.416306    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:34.419220    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:34.452145    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.452661    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:34.456139    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:34.486541    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.486593    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:34.489738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:34.520642    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.520642    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:34.524007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:34.556848    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.556848    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:34.560551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:34.589976    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.589976    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:34.594061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:34.623871    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.623871    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:34.627661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:34.655428    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.655428    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:34.655428    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:34.655428    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.693248    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:34.693248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:34.782095    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:34.782095    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:34.782095    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:34.809243    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:34.809243    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:34.859486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:34.859486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.427533    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:37.454695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:37.485702    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.485702    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:37.489329    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:37.522074    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.522074    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:37.525283    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:37.555534    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.555534    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:37.559473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:37.589923    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.589923    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:37.593340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:37.625230    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.625230    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:37.628764    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:37.658722    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.658722    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:37.661870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:37.693003    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.693003    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:37.696992    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:37.726216    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.726286    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:37.726286    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:37.726333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.791305    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:37.791305    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:37.829600    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:37.829600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:37.920892    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:37.920892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:37.920892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:37.947989    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:37.947989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:40.501988    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:40.527784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:40.563590    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.563590    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:40.567375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:40.598332    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.598332    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:40.602019    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:40.629289    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.629289    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:40.633378    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:40.660574    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.660630    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:40.664275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:40.691063    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.691063    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:40.694694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:40.723611    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.723667    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:40.726975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:40.755155    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.755155    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:40.759134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:40.793723    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.793723    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:40.793723    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:40.793723    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:40.831198    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:40.831198    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:40.925587    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:40.925587    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:40.925587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:40.954081    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:40.954114    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:41.007048    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:41.007096    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:43.582160    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:43.607539    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:43.638277    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.638277    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:43.642375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:43.675099    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.675099    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:43.678089    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:43.706803    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.706803    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:43.713114    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:43.740522    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.740522    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:43.744411    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:43.773724    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.773780    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:43.777763    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:43.803962    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.803962    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:43.807698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:43.839559    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.839559    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:43.843918    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:43.876174    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.876252    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:43.876252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:43.876252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:43.902671    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:05:43.934973    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:43.934973    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 08:05:43.999146    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:43.999146    6576 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:44.032735    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:44.033740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:44.075384    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:44.075384    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:44.157223    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:44.157223    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:44.157223    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
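The gathering commands repeated through these cycles are worth decoding: journalctl -u kubelet -n 400 tails the last 400 journal entries for the kubelet unit, and dmesg --level warn,err,crit,alert,emerg keeps only warning-and-worse kernel messages. A hedged Go sketch that bundles the same two collections; the command strings are copied from the log, while the runner itself is an illustrative stand-in for minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one of the log-collection one-liners from the report and
    // returns whatever output it produced, tolerating non-zero exits so a
    // dead service still yields what the journal has.
    func gather(cmd string) string {
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out)
    }

    func main() {
    	kubelet := gather("sudo journalctl -u kubelet -n 400")
    	kernel := gather("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	fmt.Printf("kubelet journal: %d bytes, kernel warnings: %d bytes\n", len(kubelet), len(kernel))
    }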
	I1205 08:05:46.691333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:46.717072    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:46.748595    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.748595    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:46.752218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:46.780374    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.780374    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:46.783922    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:46.815066    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.815066    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:46.818942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:46.847510    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.847563    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:46.851012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:46.883362    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.883465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:46.886941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:46.916379    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.916451    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:46.920641    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:46.949114    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.949114    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:46.953549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:46.983164    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.983164    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:46.983164    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:46.983164    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:47.022255    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:47.022255    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:47.111784    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:47.111860    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:47.111860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:47.138559    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:47.138559    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:47.188823    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:47.189346    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:48.147422    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:48.239875    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:48.239875    6576 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:48.242898    6576 out.go:179] * Enabled addons: 
	I1205 08:05:48.245836    6576 addons.go:530] duration metric: took 1m45.1017438s for enable addons: enabled=[]
	I1205 08:05:49.757493    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:49.785573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:49.818757    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.818757    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:49.822359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:49.849919    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.849919    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:49.853892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:49.881451    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.881451    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:49.884508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:49.916549    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.916599    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:49.922025    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:49.955857    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.955857    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:49.959871    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:49.992747    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.992747    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:49.997745    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:50.027985    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.027985    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:50.032696    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:50.066315    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.066315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:50.066315    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:50.066315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:50.162764    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:50.162764    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:50.162764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:50.190807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:50.190807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:50.244357    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:50.244357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:50.306832    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:50.306832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:52.850828    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:52.881404    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:52.914164    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.914164    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:52.919056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:52.946339    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.946339    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:52.950249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:52.977159    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.977159    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:52.981587    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:53.011126    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.011126    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:53.016170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:53.050900    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.050900    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:53.055929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:53.086492    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.086492    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:53.091422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:53.123587    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.123587    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:53.126586    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:53.155525    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.155525    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:53.155525    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:53.155525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:53.220198    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:53.221197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:53.261683    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:53.261683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:53.355432    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:53.355432    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:53.355432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:53.386521    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:53.386521    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:55.947613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:55.973795    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:56.007916    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.007916    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:56.011792    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:56.045094    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.045094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:56.048513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:56.082501    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.082501    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:56.086603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:56.116918    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.117005    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:56.120916    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:56.150716    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.150716    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:56.154101    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:56.186882    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.186882    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:56.190500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:56.223741    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.223741    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:56.227290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:56.255902    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.255902    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:56.255902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:56.255902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:56.285180    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:56.285180    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:56.333650    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:56.333650    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:56.393332    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:56.393332    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:56.432841    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:56.432841    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:56.521419    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:59.025923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:59.056473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:59.091893    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.091909    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:59.095650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:59.128079    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.128185    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:59.131611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:59.159655    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.159655    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:59.163348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:59.192422    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.192422    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:59.196339    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:59.226737    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.226737    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:59.230776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:59.258194    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.258194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:59.261784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:59.292592    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.292592    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:59.296370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:59.323764    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.323764    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:59.323764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:59.323764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:59.375689    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:59.376207    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:59.440586    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:59.440586    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:59.479856    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:59.479856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:59.578161    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:59.578161    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:59.578161    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.111153    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:02.137611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:02.172231    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.172231    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:02.176271    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:02.208274    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.208274    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:02.211990    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:02.244184    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.244245    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:02.247661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:02.278388    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.278388    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:02.282228    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:02.312290    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.312290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:02.316470    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:02.345487    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.345487    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:02.349444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:02.378305    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.378305    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:02.381923    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:02.409737    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.409737    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:02.409737    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:02.409737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:02.477029    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:02.477029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:02.517422    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:02.517422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:02.605249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:02.605249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:02.605249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.632767    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:02.632828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:05.196182    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:05.221488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:05.251281    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.251355    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:05.254854    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:05.284103    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.284103    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:05.288076    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:05.315552    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.315552    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:05.319409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:05.347664    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.347664    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:05.351387    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:05.382685    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.382685    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:05.386801    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:05.416816    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.416816    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:05.421471    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:05.451265    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.451350    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:05.455129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:05.486455    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.486455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:05.486455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:05.486455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:05.548252    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:05.548252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:05.586103    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:05.586103    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:05.689902    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:05.689902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:05.689902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:05.715463    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:05.715463    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:08.298546    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:08.325694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:08.358357    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.358427    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:08.362535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:08.393631    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.393631    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:08.397365    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:08.429162    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.429162    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:08.433444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:08.464672    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.464672    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:08.467810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:08.496450    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.496450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:08.499640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:08.526246    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.526246    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:08.530507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:08.558130    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.558130    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:08.561856    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:08.590753    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.590753    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:08.590753    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:08.590753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:08.656049    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:08.656049    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:08.697268    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:08.697268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:08.794510    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:08.794510    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:08.794510    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:08.839662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:08.839734    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:11.394677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:11.423727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:11.453346    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.453346    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:11.460955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:11.498834    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.498834    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:11.498834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:11.532657    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.532657    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:11.540987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:11.575759    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.575786    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:11.579561    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:11.612047    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.612102    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:11.615579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:11.644318    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.644370    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:11.648326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:11.678026    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.678026    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:11.681899    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:11.711631    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.711631    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:11.711631    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:11.711631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:11.772905    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:11.772905    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:11.814639    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:11.814639    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:11.905607    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:11.905657    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:11.905700    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:11.934717    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:11.935238    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
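
Each retry above probes for control-plane containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}"; an empty result produces the "0 containers: []" line and the matching warning. A minimal Go sketch of the same probe — the probeContainer helper is illustrative, not minikube's own code, and it assumes the Docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // probeContainer returns the IDs of containers whose name matches the
    // k8s_<component> prefix, mirroring the probe pattern in the log above.
    // (Hypothetical helper; minikube's real code lives in logs.go.)
    func probeContainer(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := probeContainer(c)
            if err != nil {
                fmt.Println("probe failed:", err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }
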
	I1205 08:06:14.488836    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:14.512857    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:14.546571    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.546571    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:14.549903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:14.580887    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.580887    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:14.584967    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:14.630312    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.630312    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:14.633809    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:14.667373    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.667373    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:14.671026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:14.699813    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.699813    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:14.703177    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:14.734619    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.734619    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:14.739056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:14.769129    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.769129    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:14.773030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:14.803689    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.803689    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:14.803689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:14.803689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:14.841923    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:14.841923    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:14.932570    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:14.932570    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:14.932570    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:14.961067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:14.961591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:15.010912    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:15.010953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
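
The "sudo pgrep -xnf kube-apiserver.*minikube.*" line that opens each retry checks whether an apiserver process exists at all; pgrep exits non-zero when nothing matches, which is what keeps the loop going here. A sketch of the same check, assuming pgrep is available on the host (the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverRunning reports whether a kube-apiserver process matching the
    // minikube pattern exists. -x requires the pattern to match exactly,
    // -n picks the newest match, and -f matches the full command line.
    func apiserverRunning() bool {
        // pgrep exits 0 when at least one process matches, 1 when none do,
        // so a nil error means a match was found.
        err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        fmt.Println("kube-apiserver running:", apiserverRunning())
    }
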
	I1205 08:06:17.575458    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:17.603741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:17.636367    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.636367    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:17.640529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:17.668380    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.668380    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:17.672111    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:17.700544    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.700544    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:17.704634    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:17.736823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.736823    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:17.741002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:17.770125    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.770125    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:17.775816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:17.812823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.812823    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:17.815683    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:17.844895    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.844895    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:17.849115    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:17.880706    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.880706    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:17.880706    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:17.880706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:17.969171    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:17.969171    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:17.969263    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:17.995396    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:17.995396    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:18.044466    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:18.044466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:18.105721    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:18.105721    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
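
Every "describe nodes" attempt above fails the same way: kubectl never gets as far as an API call because the TCP dial to [::1]:8443 is refused, meaning nothing is listening. The quickest way to tell an apiserver that is down from a kubectl that is misconfigured is to test the socket directly; a short sketch, with host and port taken from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused dial here reproduces the log's "connect: connection
        // refused" without involving kubectl, TLS, or the kubeconfig at all.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
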
	I1205 08:06:20.651671    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:20.679273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:20.707727    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.707727    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:20.711373    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:20.741891    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.741891    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:20.746073    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:20.777260    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.777260    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:20.780520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:20.816982    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.816982    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:20.820520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:20.850461    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.850461    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:20.854205    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:20.882429    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.882429    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:20.886920    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:20.914179    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.914179    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:20.917831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:20.949708    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.949708    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:20.949708    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:20.949708    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:21.013967    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:21.013967    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:21.053946    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:21.053946    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:21.140482    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:21.141002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:21.141002    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:21.170239    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:21.170239    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
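
The container-status gather uses a shell fallback: "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" prefers crictl but falls back to plain "docker ps -a" when crictl is missing or fails. The same fallback expressed in Go — illustrative only, assuming at least one of the two CLIs is installed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, mirroring
    // the "... || sudo docker ps -a" fallback visible in the log above.
    func containerStatus() (string, error) {
        if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both runtimes unavailable:", err)
            return
        }
        fmt.Print(out)
    }
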
	I1205 08:06:23.729627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:23.758686    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:23.791537    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.791594    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:23.796131    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:23.827894    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.827894    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:23.832419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:23.862718    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.862718    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:23.867837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:23.896272    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.896272    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:23.900193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:23.929016    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.929078    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:23.932778    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:23.962372    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.962447    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:23.966147    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:23.998472    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.998472    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:24.004351    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:24.033564    6576 logs.go:282] 0 containers: []
	W1205 08:06:24.033564    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:24.033564    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:24.033564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:24.099505    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:24.099505    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:24.139900    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:24.139900    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:24.233474    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:24.233474    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:24.233474    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:24.263408    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:24.263408    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
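
Taken together, the timestamps show one full probe-and-gather pass roughly every three seconds: probe for containers, gather logs, wait, retry. A stripped-down sketch of such a retry loop; the interval and deadline are illustrative, not minikube's actual values:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverHealthy() {
                fmt.Println("apiserver is up")
                return
            }
            // Each failed pass triggers another round of log gathering,
            // which is what produces the repeated blocks above.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for apiserver")
    }

    // apiserverHealthy is a stand-in for the pgrep/docker probes shown above.
    func apiserverHealthy() bool { return false }
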
	I1205 08:06:26.816321    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:26.841457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:26.872936    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.872992    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:26.876345    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:26.908512    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.908580    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:26.912736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:26.944068    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.944068    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:26.947603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:26.975323    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.975360    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:26.978941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:27.008708    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.008751    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:27.012371    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:27.044160    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.044225    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:27.047780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:27.078172    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.078172    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:27.081803    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:27.111287    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.111370    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:27.111370    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:27.111435    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:27.161265    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:27.161329    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:27.221473    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:27.221473    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:27.263907    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:27.263907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:27.357876    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:27.357876    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:27.357876    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
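
The gather commands themselves are worth decoding: "journalctl -u kubelet -n 400" takes the last 400 lines for one systemd unit, and "dmesg --level warn,err,crit,alert,emerg" keeps only warning-or-worse kernel messages (-P disables the pager, -H forces human-readable output, -L=never disables color). A sketch that shells out to the same commands the way the ssh_runner lines above do; the availability of journalctl, dmesg, and passwordless sudo on the target host are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one of the log-collection commands from the report through
    // bash -c, matching how the ssh_runner lines invoke them remotely.
    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("==> %s (err=%v)\n%s", name, err, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
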
	I1205 08:06:29.890252    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:29.916690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:29.946274    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.946274    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:29.950679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:29.979149    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.979149    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:29.982229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:30.010085    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.010085    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:30.014016    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:30.043254    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.043254    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:30.048048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:30.080613    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.080613    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:30.084300    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:30.114627    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.114627    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:30.118584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:30.147947    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.148009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:30.151166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:30.180743    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.180828    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:30.180828    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:30.180828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:30.244646    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:30.244646    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:30.286079    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:30.286079    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:30.376557    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:30.376557    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:30.376557    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:30.405737    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:30.405737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:32.958550    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:32.987728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:33.018308    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.018370    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:33.022062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:33.052435    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.052435    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:33.056434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:33.085355    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.085426    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:33.089343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:33.121676    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.121737    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:33.125504    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:33.157765    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.157765    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:33.161892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:33.191061    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.191061    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:33.194930    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:33.223173    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.223173    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:33.226650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:33.257481    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.257481    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:33.257481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:33.257481    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:33.301467    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:33.301467    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:33.389528    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:33.389528    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:33.389528    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:33.418631    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:33.418631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:33.465106    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:33.465185    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.034296    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:36.063459    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:36.095210    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.095210    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:36.098565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:36.127708    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.127786    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:36.131615    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:36.159964    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.159964    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:36.163771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:36.192604    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.192604    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:36.196679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:36.224877    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.224958    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:36.228553    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:36.258280    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.258280    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:36.261911    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:36.294140    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.294140    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:36.298273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:36.329657    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.329657    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:36.329657    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:36.329657    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:36.387784    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:36.387784    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.452385    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:36.452385    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:36.493394    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:36.493394    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:36.591485    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:36.591485    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:36.591567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.124474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:39.152578    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:39.183392    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.183392    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:39.187028    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:39.216193    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.216193    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:39.219743    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:39.251680    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.251759    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:39.255869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:39.283843    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.283843    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:39.287237    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:39.316021    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.316021    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:39.319015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:39.349194    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.349194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:39.352951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:39.403729    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.403729    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:39.411012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:39.442909    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.442909    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:39.442909    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:39.442909    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:39.509174    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:39.509174    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:39.550483    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:39.550483    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:39.650354    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:39.650354    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:39.650354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.676786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:39.676786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.228069    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:42.258786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:42.290791    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.290791    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:42.294739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:42.326094    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.326094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:42.329725    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:42.356052    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.356052    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:42.359752    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:42.390464    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.390464    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:42.393935    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:42.421882    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.421882    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:42.426609    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:42.457036    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.457036    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:42.460988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:42.486064    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.486064    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:42.491250    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:42.521748    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.521748    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
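The probe sequence above looks each control-plane component up by container name: under the dockershim-style naming that cri-dockerd keeps, Kubernetes-managed containers are called k8s_<container>_<pod>_<namespace>_..., so filtering on the prefix k8s_kube-apiserver is enough to find the component. A sketch of the same loop against a local docker CLI (the filter and format strings, and the component list, are copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("%s: docker ps failed: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

Because the probe uses ps -a, exited containers would still show up; zero IDs for every component, as in the "0 containers: []" lines here, suggests the control plane was never created rather than crashed and restarted.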
	I1205 08:06:42.521748    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:42.521748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:42.551195    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:42.552197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.613626    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:42.613683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:42.678856    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:42.679856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:42.719297    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:42.719297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:42.811034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:45.316640    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:45.343574    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:45.372899    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.372899    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:45.376229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:45.408264    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.408264    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:45.412119    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:45.440697    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.440697    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:45.444501    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:45.471692    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.471727    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:45.475496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:45.508400    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.508450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:45.512541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:45.544177    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.544233    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:45.548858    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:45.579165    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.579165    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:45.582164    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:45.623052    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.623052    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:45.623052    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:45.623052    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:45.651554    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:45.651554    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:45.701716    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:45.701768    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:45.766248    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:45.766248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:45.806341    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:45.806341    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:45.895675    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
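Note the cadence: a full probe-and-gather round repeats at 08:06:39, :42, :45, :48 and so on, roughly every three seconds. Reduced to its shape this is a poll loop with a deadline; the sketch below reuses the pgrep liveness check the log runs between rounds (the three-second interval and the two-minute budget are read off the timestamps, not taken from minikube's source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits non-zero when nothing matches, which Run() reports as an error.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the spacing of the rounds above
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }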
	I1205 08:06:48.401571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:48.432481    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:48.466418    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.466418    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:48.471424    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:48.503617    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.503617    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:48.507677    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:48.541480    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.541480    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:48.547529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:48.579177    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.579177    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:48.585087    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:48.626465    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.626465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:48.630533    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:48.660304    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.660304    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:48.663999    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:48.694957    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.694957    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:48.699665    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:48.725908    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.725908    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:48.725908    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:48.725908    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:48.817395    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:48.817466    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:48.817466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:48.848226    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:48.848739    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:48.900060    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:48.900060    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:48.962797    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:48.962797    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
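Each round gathers the same five sources: kubelet and Docker/cri-docker from journald, kernel warnings from dmesg, kubectl describe nodes, and raw container status. A compact sketch of that fan-out, run locally through /bin/bash as a stand-in for minikube's ssh_runner (command strings copied verbatim from the log; describe nodes is omitted because it needs the node's bundled kubectl and kubeconfig):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		fmt.Println("Gathering logs for", name, "...")
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%s failed: %v\n", name, err)
    		}
    		fmt.Printf("%s: %d bytes of output\n", name, len(out))
    	}
    }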
	I1205 08:06:51.508647    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:51.536278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:51.573226    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.573323    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:51.578061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:51.614603    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.614603    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:51.619576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:51.647095    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.647095    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:51.652535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:51.680320    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.680369    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:51.684269    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:51.717798    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.717827    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:51.721877    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:51.750482    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.750482    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:51.754602    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:51.786216    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.786216    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:51.790834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:51.819030    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.819030    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:51.819030    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:51.819030    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:51.876069    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:51.876110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:51.938469    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:51.938469    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.980953    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:51.980953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:52.079938    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:52.079938    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:52.079938    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:54.616891    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:54.642146    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:54.675691    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.675691    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:54.679440    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:54.709522    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.709522    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:54.713343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:54.744053    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.744112    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:54.748148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:54.782163    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.782232    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:54.786128    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:54.817067    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.817067    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:54.820867    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:54.850003    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.850003    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:54.854439    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:54.882517    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.882566    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:54.886475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:54.917057    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.917057    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:54.917057    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:54.917057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:54.982333    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:54.982333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:55.023534    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:55.023534    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:55.136747    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:55.136823    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:55.136823    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:55.169237    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:55.169237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:57.723958    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:57.750382    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:57.784932    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.784932    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:57.788837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:57.815350    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.815350    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:57.819773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:57.850513    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.850513    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:57.854585    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:57.885405    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.885405    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:57.889340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:57.917143    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.917143    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:57.921061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:57.947843    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.947843    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:57.951577    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:57.983169    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.983169    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:57.986925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:58.016381    6576 logs.go:282] 0 containers: []
	W1205 08:06:58.016381    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:58.016381    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:58.016381    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:58.081766    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:58.081766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:58.122021    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:58.122021    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:58.216654    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:58.216654    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:58.216654    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:58.245369    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:58.245369    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
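One detail worth a second look in the line above: the "container status" command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl when it is installed, lets the bare name fail otherwise, and then falls back to plain docker ps -a. The same idea expressed directly in Go, probing PATH instead of relying on shell backticks (a sketch, assuming docker is present as the fallback):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := []string{"docker", "ps", "-a"} // fallback when crictl is absent
    	if path, err := exec.LookPath("crictl"); err == nil {
    		cmd = []string{path, "ps", "-a"}
    	}
    	out, err := exec.Command(cmd[0], cmd[1:]...).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    }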
	I1205 08:07:00.814255    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:00.841335    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:00.870336    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.870336    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:00.874294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:00.905321    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.905321    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:00.908814    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:00.940896    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.940896    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:00.944651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:00.975783    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.975855    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:00.979485    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:01.007166    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.007166    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:01.011052    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:01.038708    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.038708    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:01.043766    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:01.072944    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.072944    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:01.076562    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:01.104574    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.104623    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:01.104665    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:01.104665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:01.169748    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:01.169748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:01.210259    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:01.210259    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:01.310310    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:01.310310    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:01.310310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:01.336589    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:01.336589    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:03.889510    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:03.919078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:03.953291    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.953291    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:03.956276    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:03.986975    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.986975    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:03.991157    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:04.022935    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.022935    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:04.026117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:04.058273    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.058312    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:04.061868    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:04.093136    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.093136    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:04.096666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:04.122322    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.122349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:04.126167    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:04.158513    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.158545    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:04.161969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:04.190492    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.190569    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:04.190569    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:04.190569    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:04.259062    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:04.259062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:04.299558    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:04.299558    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:04.393556    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:04.393644    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:04.393644    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:04.420122    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:04.420122    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:06.976110    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:07.001980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:07.033975    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.033975    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:07.040090    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:07.069823    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.069823    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:07.074015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:07.103072    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.103072    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:07.107448    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:07.138770    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.138770    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:07.142987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:07.174660    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.174660    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:07.178913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:07.209719    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.209719    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:07.215472    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:07.243539    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.243539    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:07.248737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:07.279448    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.279448    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:07.279448    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:07.279448    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:07.345481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:07.346489    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:07.384275    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:07.384275    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:07.479588    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:07.479588    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:07.479588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:07.506786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:07.506786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:10.078099    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:10.103951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:10.139034    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.139034    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:10.142691    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:10.174629    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.174629    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:10.178323    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:10.206817    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.206817    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:10.210968    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:10.239729    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.239820    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:10.245043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:10.277712    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.277712    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:10.283741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:10.315362    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.315362    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:10.318268    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:10.346693    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.346693    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:10.350670    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:10.379081    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.379081    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:10.379081    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:10.379081    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:10.443299    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:10.443299    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:10.482497    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:10.482497    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:10.567024    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:10.567024    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:10.567024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:10.596635    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:10.596635    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:13.157670    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:13.186965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:13.222698    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.222730    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:13.226690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:13.261914    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.261957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:13.265780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:13.294590    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.294590    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:13.299066    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:13.329216    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.329216    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:13.334474    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:13.366263    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.366290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:13.369870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:13.398379    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.398379    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:13.402396    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:13.430465    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.430465    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:13.434253    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:13.462873    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.462905    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:13.462905    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:13.462949    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:13.525954    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:13.526955    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:13.566284    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:13.567284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:13.656971    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:13.656971    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:13.656971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:13.684284    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:13.684284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.241440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:16.268513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:16.302653    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.302653    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:16.306429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:16.337387    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.337387    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:16.342004    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:16.371449    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.371449    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:16.376376    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:16.406912    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.406912    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:16.410777    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:16.438875    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.438875    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:16.442983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:16.470299    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.470299    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:16.474336    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:16.504067    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.504067    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:16.508174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:16.536869    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.536869    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:16.536869    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:16.536869    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:16.624673    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:16.624703    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:16.624755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:16.653894    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:16.653894    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.701985    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:16.701985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:16.763148    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:16.763148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.307232    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:19.334513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:19.371034    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.371140    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:19.375038    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:19.403110    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.403186    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:19.407168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:19.435904    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.435904    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:19.440294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:19.470700    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.470700    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:19.474611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:19.502846    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.502915    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:19.506400    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:19.540483    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.540483    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:19.544695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:19.576470    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.576501    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:19.579834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:19.609587    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.609587    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:19.609587    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:19.609587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.653000    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:19.653000    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:19.747787    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:19.747787    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:19.747787    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:19.774804    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:19.774804    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:19.825222    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:19.825338    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.394074    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:22.419163    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:22.454202    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.454202    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:22.457716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:22.487462    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.487615    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:22.491427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:22.522398    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.522398    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:22.526148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:22.554536    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.554536    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:22.558447    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:22.590329    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.590401    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:22.595088    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:22.626553    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.626553    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:22.630372    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:22.658911    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.658911    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:22.662715    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:22.692369    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.692444    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:22.692468    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:22.692468    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.759391    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:22.759391    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:22.801415    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:22.801415    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:22.891643    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:22.881338   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.883456   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.887030   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.888265   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.889355   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:22.891710    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:22.891738    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:22.922662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:22.922662    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:25.480645    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:25.506403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:25.536534    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.536600    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:25.540233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:25.568373    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.568373    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:25.572581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:25.604196    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.604196    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:25.608476    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:25.639923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.640007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:25.643813    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:25.673923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.673923    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:25.677542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:25.709156    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.709156    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:25.712910    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:25.744371    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.744371    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:25.750463    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:25.778113    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.778113    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:25.778113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:25.778113    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:25.842953    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:25.842953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:25.881310    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:25.881310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:25.976920    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:25.964944   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.966342   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.968369   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.969905   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.970655   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:25.976920    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:25.976920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:26.005828    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:26.005889    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:28.568522    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:28.594981    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:28.628025    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.628025    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:28.631569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:28.661047    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.661047    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:28.664662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:28.692667    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.692667    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:28.696624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:28.725878    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.725944    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:28.730056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:28.758073    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.758129    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:28.761794    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:28.788812    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.788812    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:28.793030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:28.839778    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.839778    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:28.843937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:28.873288    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.873288    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:28.873288    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:28.873288    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:28.937414    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:28.937414    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:28.975610    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:28.975610    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:29.110286    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:29.068093   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.099868   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.101288   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.103705   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.105454   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:29.110286    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:29.110286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:29.140120    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:29.140120    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:31.695315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:31.723717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:31.755093    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.755155    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:31.758672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:31.786260    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.786260    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:31.790917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:31.817450    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.817450    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:31.822438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:31.852769    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.852788    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:31.856218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:31.885715    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.885715    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:31.890036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:31.919240    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.919240    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:31.924888    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:31.956860    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.956860    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:31.960848    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:31.989055    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.989055    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:31.989055    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:31.989055    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:32.055751    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:32.055751    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:32.091848    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:32.091848    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:32.183494    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:32.172400   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.173483   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.174469   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.175868   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.177099   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:32.183494    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:32.183494    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:32.211020    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:32.211056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:34.770702    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:34.796134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:34.830020    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.830052    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:34.833506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:34.860829    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.860829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:34.864718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:34.895302    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.895302    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:34.899305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:34.928933    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.928933    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:34.935599    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:34.964256    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.964280    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:34.967945    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:34.995571    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.995571    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:35.001155    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:35.038603    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.038603    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:35.042249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:35.075025    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.075025    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:35.075025    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:35.075025    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:35.136020    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:35.136020    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:35.198233    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:35.198233    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:35.236713    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:35.236713    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:35.327635    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:35.315598   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.316759   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.320319   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.322127   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.323353   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:35.327659    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:35.327659    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:37.859618    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:37.890074    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:37.922724    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.922724    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:37.926571    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:37.959720    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.959720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:37.963770    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:37.991602    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.991602    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:37.995673    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:38.023771    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.023771    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:38.030170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:38.061676    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.061676    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:38.065660    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:38.116492    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.116542    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:38.122475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:38.151483    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.151483    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:38.155624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:38.184512    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.184512    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:38.184512    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:38.184512    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:38.221972    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:38.221972    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:38.315283    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:38.304319   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.306082   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.307978   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.309605   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.310846   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
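The repeated "connection refused" on localhost:8443 means nothing is listening on the API server's secure port inside the node, which is consistent with the empty container listings above. One way to confirm this directly (an illustrative probe, not a step this run performed):

    # From inside the minikube node: probe the apiserver's default secure port.
    # -k skips certificate verification; a refused connection confirms the
    # port is closed rather than serving an unhealthy apiserver.
    curl -sk https://localhost:8443/healthz || echo "apiserver is not listening on 8443"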
	I1205 08:07:38.315283    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:38.315283    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:38.342209    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:38.342209    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
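The container-status command above is written defensively: `which crictl || echo crictl` substitutes crictl's full path when it is installed and falls back to the bare name otherwise, and if the crictl invocation fails altogether the `|| sudo docker ps -a` branch still yields a container listing. Expanded for readability (equivalent shell, illustrative only):

    # Resolve crictl if present, otherwise keep the bare name and let the
    # docker fallback handle a missing binary.
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a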
	I1205 08:07:38.391392    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:38.391470    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
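Taken together, these entries form one pass of a wait loop: minikube polls for a running kube-apiserver process and, finding none, gathers kubelet, dmesg, node, Docker, and container-status diagnostics before retrying roughly every three seconds. A minimal shell sketch of that pattern, reconstructed from the commands in this log (the real loop lives in minikube's Go code, not shell):

    # Illustrative reconstruction of the polling behaviour visible above.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sudo journalctl -u kubelet -n 400
        sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
        sleep 3
    done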
[The same diagnostics cycle repeats, identical except for timestamps and PIDs, roughly every three seconds from 08:07:40 through 08:08:06: each pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard container, and every "kubectl describe nodes" attempt fails with the same connection-refused error against localhost:8443. The final cycle follows.]
	I1205 08:08:08.621402    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:08.647297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:08.678598    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.678679    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:08.681866    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:08.710779    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.710856    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:08.714554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:08.745379    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.745379    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:08.750135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:08.785796    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.785840    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:08.791900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:08.823728    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.823778    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:08.827659    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:08.858652    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.858726    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:08.862304    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:08.893238    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.893287    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:08.896783    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:08.927578    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.927578    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
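Before each log-gathering pass, the collector probes for every control-plane container by name (the docker ps lines above); the repeated `0 containers: []` results confirm the control plane never came up. A sketch of one such probe, assuming only a local docker CLI on PATH (the filter and format flags are copied verbatim from the log):

```go
// Sketch of one container probe, assuming a local docker CLI on PATH.
// Flags mirror the log lines:
//   docker ps -a --filter name=k8s_kube-apiserver --format {{.ID}}
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_kube-apiserver",
		"--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	// An empty slice here corresponds to the repeated
	// "logs.go:282] 0 containers: []" lines in the report.
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```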
	I1205 08:08:08.927578    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:08.927578    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:08.990752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:08.990752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:09.030509    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:09.030509    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:09.116112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:09.116629    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:09.116629    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:09.148307    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:09.148307    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:11.720341    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:11.750190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:11.784223    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.784247    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:11.789837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:11.819184    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.819184    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:11.824438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:11.852058    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.852058    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:11.857984    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:11.888391    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.888391    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:11.891707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:11.921973    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.921973    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:11.925426    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:11.953845    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.953845    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:11.957863    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:11.987150    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.987236    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:11.990921    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:12.018843    6576 logs.go:282] 0 containers: []
	W1205 08:08:12.018895    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:12.018895    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:12.018918    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:12.048523    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:12.048523    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:12.099490    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:12.099490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:12.163368    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:12.163368    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:12.204867    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:12.204867    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:12.290894    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:14.795945    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:14.821749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:14.851399    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.851399    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:14.855010    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:14.887370    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.887370    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:14.891117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:14.922139    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.922139    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:14.926245    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:14.954095    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.954095    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:14.959551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:14.987564    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.987564    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:14.991080    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:15.023941    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.023941    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:15.027344    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:15.056411    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.056474    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:15.059417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:15.092400    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.092400    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:15.092400    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:15.092400    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:15.119932    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:15.119932    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:15.169067    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:15.169067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:15.232603    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:15.232603    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:15.276106    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:15.276106    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:15.363421    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:17.870108    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:17.895889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:17.927528    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.927528    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:17.931166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:17.959105    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.959105    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:17.962846    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:17.994011    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.994011    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:17.998047    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:18.026606    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.026677    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:18.030234    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:18.061389    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.061389    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:18.065290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:18.096454    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.096454    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:18.100320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:18.129213    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.129213    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:18.133040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:18.160088    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.160111    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:18.160111    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:18.160111    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:18.221228    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:18.221228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:18.258886    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:18.258886    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:18.348416    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:18.348496    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:18.348525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:18.379855    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:18.379855    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:20.936239    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:20.959002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:20.990013    6576 logs.go:282] 0 containers: []
	W1205 08:08:20.990085    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:20.993773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:21.021884    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.021925    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:21.025964    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:21.054531    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.054531    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:21.058277    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:21.088997    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.089078    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:21.092631    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:21.121326    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.121360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:21.125135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:21.160429    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.160496    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:21.164226    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:21.192488    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.192557    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:21.196294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:21.228406    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.228445    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:21.228445    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:21.228495    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:21.291604    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:21.292600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:21.331218    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:21.331218    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:21.412454    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:21.412454    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:21.412454    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:21.441164    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:21.441229    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:23.994395    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:24.020275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:24.054682    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.054682    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:24.058674    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:24.089654    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.089654    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:24.093569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:24.123224    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.123224    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:24.127942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:24.155350    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.155350    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:24.159192    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:24.192652    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.192652    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:24.197194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:24.229851    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.229851    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:24.233957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:24.262158    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.262158    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:24.266478    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:24.297683    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.297766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:24.297766    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:24.297766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:24.388464    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:24.388464    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:24.388464    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:24.416764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:24.416764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:24.468678    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:24.469203    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:24.532678    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:24.532678    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
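Each cycle then sweeps the same diagnostic sources: the kubelet and Docker journals, dmesg, and container status via crictl with a docker fallback, plus `kubectl describe nodes`. A rough sketch of that sweep, with the commands copied from the ssh_runner lines above; executing them directly instead of over SSH inside the node, as minikube does, is this sketch's own simplification:

```go
// Rough sketch of the per-cycle diagnostic sweep. Commands are copied
// from the ssh_runner lines above; running them locally (not over SSH
// inside the minikube node) is this sketch's simplification, and it
// assumes a Linux host with bash, journalctl, and sudo available.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range probes {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
			continue
		}
		_ = out // a real collector would store or print this
	}
}
```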
	I1205 08:08:27.075175    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:27.104797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:27.137440    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.137440    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:27.141581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:27.171103    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.171126    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:27.174625    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:27.205068    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.205102    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:27.208711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:27.237765    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.237806    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:27.241719    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:27.269838    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.269838    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:27.273353    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:27.300835    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.300835    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:27.304633    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:27.333062    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.333062    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:27.338523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:27.366572    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.366572    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:27.366572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:27.366572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.402514    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:27.402514    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:27.499452    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:27.499452    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:27.499452    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:27.528089    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:27.528089    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:27.596881    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:27.596881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.168154    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:30.194986    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:30.228709    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.228709    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:30.233961    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:30.268256    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.268256    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:30.271667    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:30.300456    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.300519    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:30.303870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:30.335955    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.335955    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:30.339590    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:30.367829    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.367829    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:30.373123    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:30.401294    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.401327    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:30.404974    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:30.436526    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.436526    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:30.440246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:30.478544    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.478599    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:30.478599    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:30.478651    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.544716    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:30.544716    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:30.584496    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:30.584496    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:30.671308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:30.671352    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:30.671352    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:30.699029    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:30.699029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:33.251744    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:33.280500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:33.311912    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.311912    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:33.316407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:33.347966    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.347966    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:33.351341    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:33.386249    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.386249    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:33.389828    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:33.420571    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.420571    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:33.423584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:33.450599    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.450599    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:33.453949    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:33.488480    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.488480    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:33.492797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:33.523382    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.523382    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:33.526929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:33.561860    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.561860    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:33.561860    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:33.561860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:33.628425    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:33.628425    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:33.666453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:33.666453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:33.756872    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:33.756872    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:33.756872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:33.785780    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:33.785780    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:36.342322    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:36.368238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:36.399529    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.399529    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:36.402710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:36.430561    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.430561    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:36.434233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:36.461894    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.461894    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:36.466270    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:36.492354    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.492354    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:36.495668    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:36.526818    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.526818    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:36.530606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:36.564752    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.564752    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:36.569130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:36.598403    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.598403    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:36.603579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:36.635757    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.635757    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:36.635757    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:36.635757    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:36.702715    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:36.702715    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:36.740740    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:36.740740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:36.827779    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:36.827779    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:36.827779    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:36.855113    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:36.855148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
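
Each probe in the cycle above runs docker ps -a --filter=name=k8s_<component> --format={{.ID}} and treats empty output as "0 containers". A minimal standalone sketch of that probe, assuming only a docker CLI on PATH (the helper below is hypothetical, not minikube's logs.go code):

// probe.go: hypothetical re-creation of the per-component container probe.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers named with the k8s_<component>
// prefix that kubelet's Docker integration applies.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; empty slice = none
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "probe failed:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

Because the probe uses ps -a, an empty result for every component, as seen throughout this log, indicates no k8s_-named containers exist on the node, not even exited ones.
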
	I1205 08:08:39.404078    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:39.428626    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:39.461540    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.461540    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:39.465369    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:39.497259    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.497368    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:39.501168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:39.532526    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.532526    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:39.537388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:39.570114    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.570114    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:39.574332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:39.607392    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.607392    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:39.611100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:39.640933    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.640933    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:39.644381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:39.673224    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.673224    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:39.678235    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:39.706766    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.706766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:39.706766    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:39.706766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:39.734527    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:39.734527    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.787138    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:39.787138    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:39.849637    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:39.849637    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:39.889331    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:39.889331    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:39.977390    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
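
The repeated "connection refused" from kubectl means nothing is listening on the apiserver's port at all, which matches the empty kube-apiserver probe above. An illustrative way to verify the same condition without kubectl (a sketch, not part of the test suite):

// dialcheck.go: hypothetical TCP reachability check for the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl above targets https://localhost:8443; a raw TCP dial separates
	// "port closed" (connection refused) from TLS or auth failures higher up.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}
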
	I1205 08:08:42.481792    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:42.508550    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:42.541632    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.541632    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:42.545635    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:42.595829    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.595829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:42.601196    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:42.630888    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.630888    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:42.634929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:42.665451    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.665451    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:42.668581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:42.701244    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.701244    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:42.705368    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:42.737250    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.737250    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:42.740441    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:42.766622    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.766700    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:42.770278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:42.801486    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.801486    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:42.801486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:42.801486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:42.866794    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:42.866930    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:42.906819    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:42.906819    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:43.000226    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:43.000226    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:43.000226    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:43.027011    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:43.027011    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:45.586794    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:45.615024    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:45.642666    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.642666    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:45.646348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:45.675867    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.675867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:45.679650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:45.711785    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.711785    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:45.717449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:45.750065    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.750109    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:45.753406    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:45.782908    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.782908    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:45.786362    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:45.816309    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.816309    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:45.819889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:45.847629    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.847656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:45.850622    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:45.880676    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.880733    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:45.880759    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:45.880759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:45.943843    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:45.943843    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:45.984212    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:45.984212    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:46.071821    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:46.071821    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:46.071821    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:46.098280    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:46.098280    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
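
The "container status" gather relies on a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. use crictl when installed, otherwise fall back to the docker CLI. The same prefer-then-fall-back shape in Go, assuming either binary may be missing (illustrative only):

// status.go: hypothetical sketch of the crictl-or-docker fallback.
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the shell idiom from the log above.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("docker", "ps", "-a").Output() // fallback path
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no usable container runtime CLI:", err)
		return
	}
	fmt.Print(out)
}
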
	I1205 08:08:48.651285    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:48.676952    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:48.706696    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.706696    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:48.710427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:48.738766    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.738766    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:48.746145    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:48.773486    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.773486    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:48.778542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:48.805908    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.805908    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:48.809817    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:48.840360    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.840360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:48.843723    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:48.871560    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.871560    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:48.875316    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:48.903556    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.903556    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:48.908924    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:48.938455    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.938455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:48.938455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:48.938455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:49.001951    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:49.001951    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:49.042098    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:49.042098    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:49.131350    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:49.131350    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:49.131350    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:49.166759    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:49.166759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:51.724851    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:51.752650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:51.780528    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.780542    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:51.784422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:51.816577    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.816577    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:51.819989    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:51.849244    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.849244    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:51.853211    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:51.881159    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.881222    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:51.884831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:51.917237    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.917237    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:51.921202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:51.951018    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.951018    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:51.955222    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:51.982262    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.982262    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:51.986170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:52.013482    6576 logs.go:282] 0 containers: []
	W1205 08:08:52.013526    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:52.013564    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:52.013564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:52.050334    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:52.050334    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:52.144178    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:52.133526   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.134871   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.136142   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.137800   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.139220   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:52.133526   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.134871   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.136142   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.137800   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.139220   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:52.144178    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:52.144178    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:52.171135    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:52.171135    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:52.223993    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:52.223993    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:54.792613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:54.817042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:54.848768    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.848768    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:54.852580    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:54.881045    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.881045    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:54.885194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:54.915368    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.915368    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:54.919753    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:54.952592    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.952679    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:54.956477    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:54.989304    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.989357    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:54.992976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:55.025855    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.025855    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:55.029407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:55.059218    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.059290    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:55.063529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:55.092992    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.092992    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:55.092992    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:55.092992    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:55.201249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:55.191114   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.192097   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.193360   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.194595   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.195561   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:55.191114   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.192097   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.193360   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.194595   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.195561   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:55.201249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:55.201249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:55.228877    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:55.228907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:55.286872    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:55.286872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:55.357844    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:55.357844    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
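
Each gather cycle collects the same five sources; only their order varies from cycle to cycle. Listed as data for reference (an illustrative summary, not a structure from minikube itself):

// sources.go: hypothetical listing of the per-cycle gather commands.
package main

import "fmt"

// source pairs a log label with the command run on the node over SSH.
type source struct{ name, cmd string }

var sources = []source{
	{"kubelet", `sudo journalctl -u kubelet -n 400`},
	{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
	{"describe nodes", `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
	{"Docker", `sudo journalctl -u docker -u cri-docker -n 400`},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, s := range sources {
		fmt.Printf("%-16s %s\n", s.name, s.cmd)
	}
}
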
	I1205 08:08:57.912434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:57.938621    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:57.968927    6576 logs.go:282] 0 containers: []
	W1205 08:08:57.968927    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:57.975548    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:58.003200    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.003200    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:58.006983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:58.037886    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.037886    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:58.041594    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:58.072037    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.072037    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:58.076711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:58.118201    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.118201    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:58.122059    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:58.150468    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.150468    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:58.154554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:58.186009    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.186009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:58.189676    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:58.219204    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.219204    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:58.219204    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:58.219204    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:58.283572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:58.283572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:58.322291    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:58.322291    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:58.406023    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:58.406023    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:58.406023    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:58.434361    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:58.434881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
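
Taken together, the excerpt is a single wait loop: roughly every three seconds the test re-runs sudo pgrep -xnf kube-apiserver.*minikube.*, and while that finds nothing it re-probes containers and re-gathers logs. Reduced to a control-flow sketch (the interval is taken from the timestamps; the timeout is an assumption, not minikube's actual value):

// waitloop.go: hypothetical sketch of the poll-until-apiserver loop.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process exists;
// pgrep exits non-zero when nothing matches, exactly as in the log.
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// the real test gathers the five log sources shown earlier here
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
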
	I1205 08:09:00.986031    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:01.012520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:01.041860    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.041860    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:01.045736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:01.074168    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.074168    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:01.081136    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:01.115160    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.115160    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:01.121214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:01.152200    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.152200    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:01.155786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:01.187849    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.187849    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:01.193651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:01.220927    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.220927    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:01.225251    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:01.262648    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.262648    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:01.266549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:01.298388    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.298388    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:01.298459    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:01.298491    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:01.389098    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:01.389126    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:01.389126    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:01.418232    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:01.418232    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:01.463083    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:01.463083    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:01.528159    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:01.528159    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:04.078505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:04.106462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:04.136412    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.136412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:04.139845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:04.168393    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.168465    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:04.171965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:04.203281    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.203281    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:04.207129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:04.235244    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.235244    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:04.239720    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:04.271746    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.271746    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:04.279903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:04.308486    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.308486    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:04.312482    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:04.341988    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.341988    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:04.345122    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:04.378152    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.378152    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:04.378152    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:04.378152    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:04.443403    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:04.443403    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:04.484661    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:04.484661    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:04.574793    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:04.560661   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.561649   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.566401   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.568432   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.570652   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:04.560661   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.561649   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.566401   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.568432   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.570652   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:04.574793    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:04.574793    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:04.606357    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:04.606357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:07.162554    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:07.194738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:07.227905    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.227977    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:07.232048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:07.262861    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.262861    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:07.266595    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:07.297184    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.297184    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:07.300873    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:07.331523    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.331523    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:07.335838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:07.367893    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.367893    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:07.371282    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:07.400934    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.400934    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:07.403928    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:07.431616    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.431616    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:07.435314    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:07.469043    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.469043    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:07.469043    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:07.469043    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:07.497832    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:07.497832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:07.547846    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:07.547846    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:07.611682    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:07.611682    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:07.651105    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:07.651105    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:07.741756    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:07.730861   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.731799   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.734095   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.735203   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.736136   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:10.247138    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:10.275755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:10.311911    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.311911    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:10.317436    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:10.347243    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.347243    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:10.353296    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:10.384412    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.384412    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:10.389236    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:10.419505    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.419505    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:10.423688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:10.451213    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.451213    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:10.457390    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:10.485001    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.485001    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:10.488370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:10.519268    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.519268    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:10.524029    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:10.551544    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.551544    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:10.551544    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:10.551544    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:10.618971    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:10.618971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:10.657753    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:10.657753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:10.751422    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:10.740331   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.741382   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.742135   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.746174   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.747103   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:10.751422    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:10.751422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:10.777901    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:10.778003    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:13.340867    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:13.373007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:13.404147    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.404191    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:13.408078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:13.440768    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.440768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:13.444748    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:13.474390    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.474390    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:13.478381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:13.508004    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.508057    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:13.511749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:13.543789    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.543789    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:13.547384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:13.576308    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.576377    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:13.579736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:13.609792    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.609792    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:13.613298    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:13.642091    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.642091    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:13.642091    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:13.642091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:13.671624    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:13.671686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:13.718995    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:13.718995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:13.782056    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:13.782056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:13.821453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:13.821453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:13.928916    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:13.918145   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.919184   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.920131   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.922446   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.923724   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:16.433905    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:16.459887    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:16.496160    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.496160    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:16.499639    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:16.526877    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.526877    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:16.530750    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:16.560261    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.560261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:16.563991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:16.595914    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.595914    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:16.599869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:16.627694    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.627694    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:16.632403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:16.660769    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.660769    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:16.664194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:16.692707    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.692707    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:16.698036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:16.728749    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.728749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:16.728749    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:16.728749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:16.778953    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:16.779017    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:16.841091    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:16.841091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:16.881145    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:16.881145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:16.969295    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:16.959645   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.960522   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.962481   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.963671   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.964721   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:16.969332    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:16.969362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:19.502757    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:19.529429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:19.557499    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.557499    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:19.561490    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:19.590127    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.590127    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:19.594042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:19.622382    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.622382    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:19.626026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:19.653513    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.653513    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:19.656672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:19.686153    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.686153    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:19.691297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:19.720831    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.720858    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:19.724786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:19.751107    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.751107    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:19.754979    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:19.782999    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.782999    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:19.782999    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:19.782999    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:19.844801    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:19.844801    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:19.884439    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:19.884439    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:19.977224    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:19.964996   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.968924   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.970786   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.973180   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.975233   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:19.977224    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:19.977224    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:20.007404    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:20.007404    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:22.569427    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:22.596121    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:22.628181    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.628181    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:22.632086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:22.660848    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.660848    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:22.664755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:22.694182    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.694261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:22.698085    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:22.726532    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.726600    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:22.730354    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:22.757319    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.757355    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:22.760937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:22.792791    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.792791    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:22.799388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:22.841372    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.841372    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:22.845285    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:22.879377    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.879377    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:22.879377    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:22.879377    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:22.946156    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:22.946156    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:22.990461    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:22.990461    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:23.119453    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:23.109436   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.110223   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.112884   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.115261   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.117081   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:23.119453    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:23.119453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:23.146199    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:23.147241    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:25.703191    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:25.728570    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:25.758884    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.758884    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:25.765071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:25.792957    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.792957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:25.796556    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:25.825466    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.825466    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:25.828728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:25.857451    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.857521    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:25.861306    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:25.887700    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.887700    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:25.891071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:25.920875    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.920875    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:25.924452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:25.952908    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.952952    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:25.956305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:25.987608    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.987608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:25.987608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:25.987608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:26.027162    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:26.027162    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:26.120245    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:26.107417   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.108200   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.112823   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.113923   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.114975   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:26.120245    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:26.120245    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:26.147670    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:26.147697    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:26.198923    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:26.198963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:28.769076    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:28.797716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:28.829859    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.829898    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:28.833257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:28.864507    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.864507    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:28.868407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:28.898827    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.898827    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:28.902971    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:28.933087    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.933087    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:28.937063    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:28.964140    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.964140    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:28.968403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:28.997620    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.997620    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:29.001779    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:29.035745    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.035745    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:29.038757    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:29.068429    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.068429    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:29.068429    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:29.068429    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:29.124688    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:29.124688    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:29.188675    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:29.188675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:29.227887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:29.227887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:29.312828    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:29.301515   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.302784   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.303557   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.306066   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.307186   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:29.312828    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:29.312828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:31.845911    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:31.878797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:31.916523    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.916523    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:31.919583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:31.950914    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.950976    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:31.954687    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:31.983555    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.983580    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:31.987603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:32.021007    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.021007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:32.025190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:32.056980    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.057033    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:32.060500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:32.104780    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.104780    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:32.108815    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:32.135429    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.135494    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:32.138969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:32.171260    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.171260    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:32.171260    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:32.171260    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:32.237752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:32.237752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:32.277887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:32.277887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:32.365810    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:32.355223   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.356563   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.358244   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.359525   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.360794   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:32.365810    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:32.365810    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:32.392252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:32.392252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:34.943627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:34.969529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:35.010672    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.010672    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:35.015462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:35.048036    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.048036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:35.055991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:35.103005    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.103005    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:35.106890    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:35.137906    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.137906    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:35.141530    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:35.172625    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.172625    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:35.176175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:35.209474    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.209474    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:35.213175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:35.244787    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.244787    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:35.248557    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:35.275127    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.275158    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:35.275158    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:35.275158    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:35.334298    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:35.334298    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:35.373969    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:35.373969    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:35.459656    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:35.459755    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:35.459755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:35.489057    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:35.489057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
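The run above is one full iteration of minikube's apiserver wait loop: a pgrep for the kube-apiserver process, then one docker ps name filter per expected control-plane container, all of which come back empty. A minimal shell sketch of the same per-component check, run by hand against the node (the profile name newest-cni-042100 is taken from the Docker journal at the end of this log; substitute your own):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${c} =="
	  # empty output here corresponds to the "0 containers: []" lines in the log
	  minikube -p newest-cni-042100 ssh -- \
	    "docker ps -a --filter=name=k8s_${c} --format={{.ID}}"
	done
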
	I1205 08:09:38.049404    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:38.073507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:38.101267    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.101337    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:38.104951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:38.134276    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.134276    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:38.139127    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:38.166437    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.166437    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:38.170518    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:38.199145    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.199145    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:38.202760    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:38.230466    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.230466    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:38.233640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:38.263867    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.263867    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:38.267542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:38.297791    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.297791    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:38.301874    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:38.332980    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.332980    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:38.332980    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:38.332980    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:38.396086    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:38.396086    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:38.433018    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:38.433018    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:38.516847    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:38.516847    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:38.516847    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:38.545985    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:38.545985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
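Each "Gathering logs for ..." step above maps one-to-one onto a command inside the node. Collected by hand, the same bundle looks like this (a sketch under the same profile-name assumption as above):

	minikube -p newest-cni-042100 ssh -- "sudo journalctl -u kubelet -n 400" > kubelet.log
	minikube -p newest-cni-042100 ssh -- "sudo journalctl -u docker -u cri-docker -n 400" > docker.log
	minikube -p newest-cni-042100 ssh -- \
	  "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" > dmesg.log
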
	I1205 08:09:41.097758    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:41.125607    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:41.156423    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.156423    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:41.159823    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:41.188324    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.188383    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:41.192299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:41.224751    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.224789    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:41.228655    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:41.257790    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.257790    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:41.261606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:41.292935    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.292999    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:41.296487    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:41.322728    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.322728    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:41.326980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:41.355569    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.355569    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:41.359412    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:41.388228    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.388228    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:41.388228    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:41.388228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:41.454094    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:41.454094    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:41.492536    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:41.492536    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:41.584848    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:41.584892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:41.584892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:41.611807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:41.611807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:44.169483    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:44.196254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:44.224412    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.224412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:44.229628    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:44.257724    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.257724    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:44.262355    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:44.289872    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.289926    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:44.293506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:44.321891    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.321891    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:44.325045    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:44.354424    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.354424    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:44.357980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:44.388960    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.388960    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:44.392224    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:44.424484    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.424484    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:44.427710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:44.458834    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.458834    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:44.458834    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:44.458834    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:44.523336    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:44.523336    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:44.560362    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:44.560362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:44.656711    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:44.656711    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:44.656711    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:44.682009    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:44.683010    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.243380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:47.270606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:47.302678    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.302720    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:47.305835    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:47.334169    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.334213    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:47.338162    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:47.370622    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.370693    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:47.374238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:47.406764    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.406787    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:47.410449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:47.439290    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.439332    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:47.442816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:47.475239    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.475239    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:47.479100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:47.510196    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.510196    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:47.513831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:47.543315    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.543378    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:47.543378    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:47.543411    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:47.577600    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:47.577600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.651517    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:47.651517    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:47.717530    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:47.717530    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:47.757989    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:47.757989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:47.848615    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
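Every describe-nodes failure in this loop reduces to the same symptom: kubectl cannot open a TCP connection to the apiserver on localhost:8443 inside the node. A quick manual probe of that port (a sketch; it assumes curl is available in the node image):

	# a refused connection shows up as curl exit code 7 and status 000
	minikube -p newest-cni-042100 ssh -- \
	  "curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/healthz || true"
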
	I1205 08:09:50.354473    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:50.381662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:50.410303    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.410303    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:50.416210    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:50.443479    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.443479    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:50.447606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:50.475214    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.475214    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:50.479409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:50.508984    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.508984    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:50.513185    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:50.544532    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.544532    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:50.548200    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:50.578350    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.578350    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:50.583137    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:50.615656    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.615656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:50.619983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:50.649117    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.649117    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:50.649117    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:50.649117    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:50.678837    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:50.678837    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:50.730963    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:50.730963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:50.797442    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:50.797442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:50.839051    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:50.840050    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:50.934073    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.440116    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:53.465957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:53.497390    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.497462    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:53.501077    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:53.529488    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.529488    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:53.536331    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:53.563367    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.563367    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:53.566361    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:53.596894    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.596894    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:53.600611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:53.630623    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.630623    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:53.634434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:53.664123    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.664123    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:53.668403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:53.697948    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.697948    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:53.701419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:53.730378    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.730462    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:53.730462    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:53.730462    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:53.798465    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:53.798465    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:53.841124    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:53.841124    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:53.935344    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.936318    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:53.936318    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:53.965040    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:53.965040    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:56.520907    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:56.551718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:56.584506    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.584506    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:56.588065    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:56.618214    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.618214    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:56.622199    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:56.650798    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.650798    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:56.654367    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:56.685409    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.685440    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:56.688781    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:56.719049    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.719163    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:56.722810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:56.753646    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.753646    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:56.757666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:56.793942    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.793942    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:56.798049    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:56.827315    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.827315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:56.827315    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:56.827315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:56.893213    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:56.893213    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:56.931234    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:56.931234    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:57.020142    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:57.020142    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:57.020142    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:57.048871    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:57.048871    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:59.606004    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:59.632524    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:59.662177    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.662177    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:59.666311    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:59.701152    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.701202    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:59.704398    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:59.733278    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.733278    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:59.738174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:59.769038    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.769038    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:59.773266    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:59.814259    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.814259    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:59.818330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:59.848066    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.848066    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:59.851684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:59.880029    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.880029    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:59.884457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:59.914608    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.914608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:59.914608    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:59.914608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:59.978490    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:59.978490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:00.018881    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:00.018881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:00.109744    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:00.109744    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:00.109744    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:00.137522    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:00.137591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:02.693722    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:02.718495    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:10:02.754864    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.754864    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:10:02.758547    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:10:02.795133    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.795231    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:10:02.798914    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:10:02.828115    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.828115    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:10:02.831263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:10:02.864241    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.864241    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:10:02.867861    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:10:02.895555    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.895555    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:10:02.901617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:10:02.931756    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.931756    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:10:02.935718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:10:02.964034    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.964034    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:10:02.968113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:10:03.000080    6576 logs.go:282] 0 containers: []
	W1205 08:10:03.000080    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:10:03.000080    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:03.000080    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:03.092694    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:03.094183    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:03.094183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:03.124625    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:03.124625    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:03.178920    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:10:03.178920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:10:03.237776    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:10:03.237776    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:05.783793    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:05.810874    6576 out.go:203] 
	W1205 08:10:05.812874    6576 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 08:10:05.812874    6576 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 08:10:05.812874    6576 out.go:285] * Related issues:
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 08:10:05.815880    6576 out.go:203] 
	
	
	==> Docker <==
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014561584Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014638592Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014649493Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014654993Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014662094Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014686897Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014806909Z" level=info msg="Initializing buildkit"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.159292906Z" level=info msg="Completed buildkit initialization"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170523657Z" level=info msg="Daemon has completed initialization"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170725677Z" level=info msg="API listen on [::]:2376"
	Dec 05 08:04:00 newest-cni-042100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170749180Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170751380Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Loaded network plugin cni"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 08:04:01 newest-cni-042100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:19.311956   20126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:19.313582   20126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:19.314464   20126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:19.317127   20126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:19.318420   20126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.912373] CPU: 10 PID: 467231 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f59c4559b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f59c4559af6.
	[  +0.000001] RSP: 002b:00007fff7b401a80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.986945] CPU: 6 PID: 467375 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f68553b7b20
	[  +0.000010] Code: Unable to access opcode bytes at RIP 0x7f68553b7af6.
	[  +0.000001] RSP: 002b:00007ffe7761e510 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 08:10:19 up  3:44,  0 user,  load average: 0.89, 2.21, 3.29
	Linux newest-cni-042100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:10:15 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:16 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 05 08:10:16 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:16 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:16 newest-cni-042100 kubelet[19934]: E1205 08:10:16.785430   19934 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:16 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:16 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:17 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 05 08:10:17 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:17 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:17 newest-cni-042100 kubelet[19964]: E1205 08:10:17.555031   19964 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:17 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:17 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:18 newest-cni-042100 kubelet[19994]: E1205 08:10:18.288532   19994 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:18 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:19 newest-cni-042100 kubelet[20043]: E1205 08:10:19.040463   20043 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:19 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:19 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
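The repeating kubelet failure captured above ("kubelet is configured to not run on a host using cgroup v1") is the proximate reason the apiserver process never appears, and the Docker daemon log in the same dump shows the matching cgroup v1 deprecation warning. A minimal diagnostic sketch for confirming the host cgroup mode, assuming shell access to the node via the profile named in this log (illustrative only, not part of the test run):

	# "cgroup2fs" = cgroup v2 (unified); "tmpfs" = legacy cgroup v1,
	# which this kubelet build refuses to validate against.
	out/minikube-windows-amd64.exe -p newest-cni-042100 ssh -- stat -fc %T /sys/fs/cgroup/

On this runner the 5.15.153.1-microsoft-standard-WSL2 kernel is evidently still providing cgroup v1, so the systemd restart loop recorded above cannot converge.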
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (593.7236ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-042100" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042100
helpers_test.go:243: (dbg) docker inspect newest-cni-042100:
-- stdout --
	[
	    {
	        "Id": "ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619",
	        "Created": "2025-12-05T07:52:58.091352749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T08:03:50.023797205Z",
	            "FinishedAt": "2025-12-05T08:03:46.631173784Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/hosts",
	        "LogPath": "/var/lib/docker/containers/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619/ee0c9d80d83ae226c5917a251879cd3cc8e4090b42883ed0f70f35338b837619-json.log",
	        "Name": "/newest-cni-042100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-042100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c9efcf7284a5076f16d6de672bc314d2a12eb36e68c5b125ff2e95afcdfabbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042100",
	                "Source": "/var/lib/docker/volumes/newest-cni-042100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042100",
	                "name.minikube.sigs.k8s.io": "newest-cni-042100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7425ef782ce126f539b7a23248f53aee42fe4667088eea6cf367858b569563e9",
	            "SandboxKey": "/var/run/docker/netns/7425ef782ce1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62708"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62709"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62710"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62711"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62712"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "174359b7b50b3bec7b4847d3ab43850e80d128f01a95736675cb3ceba87aab04",
	                    "EndpointID": "5e8b48011f9a64464c884645b921403d03309228e61384410733ff99b4453af2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042100",
	                        "ee0c9d80d83a"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
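The helper dumps the entire docker inspect document; when a single field is wanted, the same data can be read back with a Go template, exactly as minikube does for the 22/tcp mapping later in this log (see the Last Start section). An illustrative one-liner for the apiserver port mapping shown above (62712):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-042100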
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (612.0251ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
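The two probes above each read one status field through a Go template. Purely as an illustrative sketch (not part of the test run), the divergent states can be surfaced in a single call by combining the fields minikube's status command exposes (Host, Kubelet, APIServer, Kubeconfig):

	out/minikube-windows-amd64.exe status -p newest-cni-042100 --format="host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}"

Here Host reports Running while APIServer reports Stopped, which is the contradiction the post-mortem logs below document.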
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25
E1205 08:10:22.692682    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-042100 logs -n 25: (1.7384535s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-218000 sudo systemctl status docker --all --full --no-pager          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;   │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat docker --no-pager                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo crio config                                               │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/docker/daemon.json                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo docker system info                                       │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p bridge-218000                                                                │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat cri-docker --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cri-dockerd --version                                    │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status containerd --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat containerd --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /lib/systemd/system/containerd.service               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/containerd/config.toml                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo containerd config dump                                   │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status crio --all --full --no-pager            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │                     │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat crio --no-pager                            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo crio config                                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p kubenet-218000                                                               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ image   │ newest-cni-042100 image list --format=json                                      │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ pause   │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ unpause │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	W1205 08:03:44.511207    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:46.513793    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	Log file created at: 2025/12/05 08:03:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:03:48.079593    6576 out.go:360] Setting OutFile to fd 1628 ...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.133685    6576 out.go:374] Setting ErrFile to fd 1512...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.149881    6576 out.go:368] Setting JSON to false
	I1205 08:03:48.152825    6576 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13085,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:03:48.152825    6576 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:03:48.159945    6576 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:03:48.164658    6576 notify.go:221] Checking for updates...
	I1205 08:03:48.167308    6576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:48.170547    6576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:03:48.173264    6576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:03:48.177277    6576 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:03:48.179134    6576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:03:48.182963    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:48.184223    6576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:03:48.306826    6576 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:03:48.310816    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.562528    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.540004205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.565521    6576 out.go:179] * Using the docker driver based on existing profile
	I1205 08:03:48.568528    6576 start.go:309] selected driver: docker
	I1205 08:03:48.568528    6576 start.go:927] validating driver "docker" against &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.568528    6576 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:03:48.621627    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.870676    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.852383077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.870676    6576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 08:03:48.870676    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:03:48.871676    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:03:48.871676    6576 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.874674    6576 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 08:03:48.876674    6576 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:03:48.879674    6576 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:03:48.881674    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:03:48.881674    6576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 08:03:48.924123    6576 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:48.965045    6576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:03:48.965045    6576 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 08:03:49.173795    6576 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:49.174041    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 08:03:49.174210    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 08:03:49.176070    6576 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:03:49.176070    6576 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:49.176610    6576 start.go:364] duration metric: took 539.2µs to acquireMachinesLock for "newest-cni-042100"
	I1205 08:03:49.176876    6576 start.go:96] Skipping create...Using existing machine configuration
	I1205 08:03:49.176954    6576 fix.go:54] fixHost starting: 
	I1205 08:03:49.185185    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:49.467905    6576 fix.go:112] recreateIfNeeded on newest-cni-042100: state=Stopped err=<nil>
	W1205 08:03:49.468085    6576 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 08:03:46.247259    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:48.745542    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:50.273234    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:48.514113    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:50.532984    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:53.014533    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:49.492567    6576 out.go:252] * Restarting existing docker container for "newest-cni-042100" ...
	I1205 08:03:49.497575    6576 cli_runner.go:164] Run: docker start newest-cni-042100
	I1205 08:03:50.779131    6576 cli_runner.go:217] Completed: docker start newest-cni-042100: (1.2815354s)
	I1205 08:03:50.788112    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:51.139299    6576 kic.go:430] container "newest-cni-042100" state is running.
	I1205 08:03:51.164376    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:51.273747    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:51.276892    6576 machine.go:94] provisionDockerMachine start ...
	I1205 08:03:51.284394    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:51.396042    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:51.397040    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:51.397040    6576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:03:51.400042    6576 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 08:03:52.385305    6576 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.385658    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 08:03:52.385720    6576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.211458s
	I1205 08:03:52.385800    6576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 08:03:52.435659    6576 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.435659    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 08:03:52.435659    6576 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2613971s
	I1205 08:03:52.435659    6576 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 08:03:52.467883    6576 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.468216    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 08:03:52.468216    6576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.2939732s
	I1205 08:03:52.468216    6576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 08:03:52.472465    6576 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.472465    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 08:03:52.472465    6576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2982024s
	I1205 08:03:52.472465    6576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 08:03:52.472991    6576 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.473088    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 08:03:52.473088    6576 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2988253s
	I1205 08:03:52.473088    6576 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 08:03:52.478918    6576 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.479537    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 08:03:52.479537    6576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3052743s
	I1205 08:03:52.479537    6576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 08:03:52.488107    6576 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.489284    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 08:03:52.489284    6576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.3150206s
	I1205 08:03:52.489284    6576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 08:03:52.587256    6576 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.588098    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 08:03:52.588098    6576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.413907s
	I1205 08:03:52.588098    6576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 08:03:52.588098    6576 cache.go:87] Successfully saved all images to host disk.
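Each cache.go sequence above takes a per-image lock, checks whether the image tarball already exists under .minikube\cache\images, and skips the download on a hit; all eight images hit here. A minimal sketch of that check-before-fetch pattern (lock granularity and path layout are simplified for illustration):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    	"sync"
    )

    var cacheMu sync.Mutex // stand-in for the per-image file locks in the log

    // ensureCached reports whether the image tarball is already on disk,
    // mirroring the "exists ... succeeded" decisions above.
    func ensureCached(cacheDir, image string) (bool, error) {
    	cacheMu.Lock()
    	defer cacheMu.Unlock()
    	// registry.k8s.io/etcd:3.6.5-0 -> registry.k8s.io/etcd_3.6.5-0
    	rel := strings.ReplaceAll(image, ":", "_")
    	path := filepath.Join(cacheDir, filepath.FromSlash(rel))
    	if _, err := os.Stat(path); err == nil {
    		return true, nil // cache hit: nothing to download
    	} else if !os.IsNotExist(err) {
    		return false, err
    	}
    	// cache miss: this is where the image would be pulled and saved to tar.
    	return false, nil
    }

    func main() {
    	hit, err := ensureCached(`.minikube\cache\images\amd64`, "registry.k8s.io/etcd:3.6.5-0")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("cache hit:", hit)
    }
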
	W1205 08:03:50.818460    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	I1205 08:03:53.244351    4412 pod_ready.go:94] pod "coredns-66bc5c9577-zrgxp" is "Ready"
	I1205 08:03:53.244351    4412 pod_ready.go:86] duration metric: took 21.0105368s for pod "coredns-66bc5c9577-zrgxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.250834    4412 pod_ready.go:83] waiting for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.262503    4412 pod_ready.go:94] pod "etcd-bridge-218000" is "Ready"
	I1205 08:03:53.262503    4412 pod_ready.go:86] duration metric: took 11.6685ms for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.271087    4412 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.281426    4412 pod_ready.go:94] pod "kube-apiserver-bridge-218000" is "Ready"
	I1205 08:03:53.281426    4412 pod_ready.go:86] duration metric: took 10.3388ms for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.286385    4412 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.438718    4412 pod_ready.go:94] pod "kube-controller-manager-bridge-218000" is "Ready"
	I1205 08:03:53.438718    4412 pod_ready.go:86] duration metric: took 152.3311ms for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.641268    4412 pod_ready.go:83] waiting for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.039664    4412 pod_ready.go:94] pod "kube-proxy-8r4gs" is "Ready"
	I1205 08:03:54.039664    4412 pod_ready.go:86] duration metric: took 398.3895ms for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.241161    4412 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:94] pod "kube-scheduler-bridge-218000" is "Ready"
	I1205 08:03:54.641085    4412 pod_ready.go:86] duration metric: took 399.9175ms for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:40] duration metric: took 32.4419039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:54.749081    4412 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:03:54.754768    4412 out.go:179] * Done! kubectl is now configured to use "bridge-218000" cluster and "default" namespace by default
	W1205 08:03:55.516894    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:58.012284    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:54.578463    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.578463    6576 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 08:03:54.583153    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.645702    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.646148    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.646193    6576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 08:03:54.866524    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.872867    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.933417    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.934199    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.934272    6576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:03:55.129977    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:03:55.129977    6576 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:03:55.129977    6576 ubuntu.go:190] setting up certificates
	I1205 08:03:55.129977    6576 provision.go:84] configureAuth start
	I1205 08:03:55.133735    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:55.190185    6576 provision.go:143] copyHostCerts
	I1205 08:03:55.190185    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:03:55.190185    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:03:55.190984    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:03:55.191986    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:03:55.191986    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:03:55.192251    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:03:55.193178    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:03:55.193178    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:03:55.193462    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:03:55.194234    6576 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
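provision.go issues a server certificate with the SAN set [127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100], signed by the minikube CA. A sketch of assembling that SAN list with crypto/x509; to stay short it self-signs rather than signing with a CA key, which is where it diverges from what minikube actually does:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-042100"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// the SAN set from the log line above
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-042100"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
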
	I1205 08:03:55.277216    6576 provision.go:177] copyRemoteCerts
	I1205 08:03:55.282373    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:03:55.285821    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.350220    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:55.476652    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:03:55.511250    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:03:55.546706    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 08:03:55.583614    6576 provision.go:87] duration metric: took 453.6304ms to configureAuth
	I1205 08:03:55.583614    6576 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:03:55.585275    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:55.589206    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.651189    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.652212    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.652246    6576 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:03:55.836329    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:03:55.837449    6576 ubuntu.go:71] root file system type: overlay
	I1205 08:03:55.837646    6576 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:03:55.841558    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.910453    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.911069    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.911069    6576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:03:56.123635    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:03:56.128031    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.191540    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:56.191765    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:56.191765    6576 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:03:56.396364    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
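The one-liner above only swaps in docker.service.new (and reloads/restarts docker) when diff reports a change, so an unchanged unit file is a no-op on restart. The same write-if-different idea in Go; the systemctl invocations mirror the logged command, the helper name is illustrative:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // replaceIfChanged mimics the "diff || { mv && systemctl restart }" one-liner
    // above: it only installs the new unit and restarts the service when the
    // rendered content actually differs from what is on disk.
    func replaceIfChanged(path string, rendered []byte, unit string) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, rendered) {
    		return nil // unchanged: no daemon-reload, no restart
    	}
    	if err := os.WriteFile(path, rendered, 0o644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"-f", "enable", unit},
    		{"-f", "restart", unit},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	_ = replaceIfChanged // wire up with the rendered docker.service content
    }
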
	I1205 08:03:56.396364    6576 machine.go:97] duration metric: took 5.1193899s to provisionDockerMachine
	I1205 08:03:56.396364    6576 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 08:03:56.396897    6576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:03:56.402233    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:03:56.406223    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.460168    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.609105    6576 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:03:56.617925    6576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:03:56.617925    6576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:03:56.618732    6576 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:03:56.623542    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:03:56.637899    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:03:56.671787    6576 start.go:296] duration metric: took 274.8468ms for postStartSetup
	I1205 08:03:56.675921    6576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:03:56.678948    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.735289    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.884826    6576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:03:56.893835    6576 fix.go:56] duration metric: took 7.7168367s for fixHost
	I1205 08:03:56.893835    6576 start.go:83] releasing machines lock for "newest-cni-042100", held for 7.7169474s
	I1205 08:03:56.896826    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:56.959384    6576 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:03:56.965413    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.966255    6576 ssh_runner.go:195] Run: cat /version.json
	I1205 08:03:56.973872    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:57.022198    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:57.026201    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	W1205 08:03:57.148711    6576 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
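This exit-127 is the interesting failure here: the registry probe invokes curl.exe, the Windows binary name, inside the Linux guest, where only curl exists, so the probe fails with "command not found" regardless of whether the registry is reachable. That appears to be what produces the "! Failing to connect to https://registry.k8s.io/" warning flagged as unexpected stderr in TestErrorSpam/setup. A hedged sketch of choosing the probe binary by target OS rather than host OS (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"runtime"
    )

    // curlBinary picks the probe binary name for wherever the command will run.
    // The log above suggests the host convention (curl.exe on Windows) leaked
    // into a command executed inside the Linux guest.
    func curlBinary(targetGOOS string) string {
    	if targetGOOS == "windows" {
    		return "curl.exe"
    	}
    	return "curl"
    }

    func main() {
    	fmt.Println("host probe: ", curlBinary(runtime.GOOS))
    	fmt.Println("guest probe:", curlBinary("linux")) // the kic container is Linux
    }
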
	I1205 08:03:57.162212    6576 ssh_runner.go:195] Run: systemctl --version
	I1205 08:03:57.181097    6576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:03:57.193288    6576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:03:57.197753    6576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:03:57.214357    6576 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 08:03:57.214357    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.214357    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.214357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:57.242461    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:03:57.262818    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 08:03:57.264705    6576 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:03:57.264749    6576 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:03:57.282712    6576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:03:57.286891    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:57.310466    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.333091    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:57.356105    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.377603    6576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:57.401090    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:57.423330    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:57.445407    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:57.472206    6576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:57.488210    6576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:03:57.505210    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:57.657790    6576 ssh_runner.go:195] Run: sudo systemctl restart containerd
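The block above pushes the cgroupfs driver and related settings into /etc/containerd/config.toml through a series of in-place sed substitutions, then restarts containerd. The SystemdCgroup rewrite expressed as the equivalent Go regexp (illustrative; the behavior matches the sed expression logged above):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // same substitution as:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    func setCgroupfs(configTOML string) string {
    	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
    	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
    		"    SystemdCgroup = true\n"
    	fmt.Print(setCgroupfs(in))
    }
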
	I1205 08:03:57.802417    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.802417    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.807146    6576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:57.832467    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.857712    6576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:57.930272    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.960276    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:57.984286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:58.017277    6576 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:58.032288    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:58.048281    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:03:58.077282    6576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:58.275290    6576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:58.457293    6576 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:58.457293    6576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:03:58.486286    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:58.509287    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:58.648318    6576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:04:00.173930    6576 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5255881s)
	I1205 08:04:00.177929    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:04:00.201541    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:04:00.228851    6576 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 08:04:00.259044    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:00.283032    6576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:04:00.429299    6576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:04:00.593446    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.738544    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:04:00.766865    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:04:00.791407    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.930315    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:04:01.041317    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:01.059628    6576 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:04:01.064630    6576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
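The "Will wait 60s for socket path" step is a stat poll: keep checking /var/run/cri-dockerd.sock until it appears or the deadline passes. A local sketch of the same wait (minikube runs the stat over SSH instead):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the path exists, like the
    // "stat /var/run/cri-dockerd.sock" in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("socket is ready")
    }
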
	I1205 08:04:01.072635    6576 start.go:564] Will wait 60s for crictl version
	I1205 08:04:01.076636    6576 ssh_runner.go:195] Run: which crictl
	I1205 08:04:01.090615    6576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:04:01.132099    6576 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:04:01.136068    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.182106    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.227459    6576 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 08:04:01.231071    6576 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 08:04:01.375969    6576 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:04:01.379962    6576 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:04:01.387350    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
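The hosts update above is an upsert: grep -v strips any stale host.minikube.internal line, the fresh ip<TAB>host mapping is appended, and the result is copied back over /etc/hosts, so repeated starts never accumulate duplicates. The same logic in Go, operating on the file contents directly:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost drops any existing line for the given host and appends the
    // fresh ip<TAB>host mapping -- the grep -v / echo pattern from the log.
    func upsertHost(hosts, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry, same as `grep -v $'\thost...'`
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	in := "127.0.0.1\tlocalhost\n192.168.65.1\thost.minikube.internal\n"
    	fmt.Print(upsertHost(in, "192.168.65.254", "host.minikube.internal"))
    }
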
	I1205 08:04:01.408320    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:01.468320    6576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 08:04:00.335905    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:00.512126    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:04:03.018493    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:04:01.471323    6576 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:04:01.471323    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:04:01.475324    6576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:04:01.511342    6576 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:04:01.512362    6576 cache_images.go:86] Images are preloaded, skipping loading
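The preload decision compares the output of docker images --format {{.Repository}}:{{.Tag}} against the image set this Kubernetes version needs; everything is present, so loading is skipped. A sketch of that set difference (image lists shortened for illustration):

    package main

    import "fmt"

    // missingImages reports which required images are absent from the list that
    // `docker images --format {{.Repository}}:{{.Tag}}` returned.
    func missingImages(required, present []string) []string {
    	have := make(map[string]bool, len(present))
    	for _, img := range present {
    		have[img] = true
    	}
    	var missing []string
    	for _, img := range required {
    		if !have[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing
    }

    func main() {
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/etcd:3.6.5-0",
    		"registry.k8s.io/pause:3.10.1",
    	}
    	present := []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/etcd:3.6.5-0",
    		"registry.k8s.io/pause:3.10.1",
    	}
    	fmt.Println("missing:", missingImages(required, present)) // empty => skip loading
    }
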
	I1205 08:04:01.512362    6576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 08:04:01.512362    6576 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 08:04:01.515327    6576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:04:01.600646    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:04:01.600646    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:04:01.600646    6576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 08:04:01.600646    6576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:04:01.600646    6576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:04:01.604645    6576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 08:04:01.617663    6576 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:04:01.621646    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:04:01.634708    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 08:04:01.659457    6576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 08:04:01.681516    6576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
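The kubeadm documents above are rendered from the typed options dumped at kubeadm.go:190 and written to /var/tmp/minikube/kubeadm.yaml.new, then diffed against the previous copy further down. A small text/template sketch of rendering one stanza that way; the template and struct here are illustrative, not minikube's actual ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A small slice of the kubeadm config above, rendered from typed options
    // the way the full document is generated.
    const networking = `networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    type opts struct {
    	DNSDomain   string
    	PodSubnet   string
    	ServiceCIDR string
    }

    func main() {
    	t := template.Must(template.New("networking").Parse(networking))
    	if err := t.Execute(os.Stdout, opts{
    		DNSDomain:   "cluster.local",
    		PodSubnet:   "10.42.0.0/16",
    		ServiceCIDR: "10.96.0.0/12",
    	}); err != nil {
    		panic(err)
    	}
    }
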
	I1205 08:04:01.709549    6576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:04:01.717165    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.737936    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:01.886462    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:01.908845    6576 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 08:04:01.908845    6576 certs.go:195] generating shared ca certs ...
	I1205 08:04:01.908845    6576 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:01.910250    6576 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:04:01.910428    6576 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:04:01.910428    6576 certs.go:257] generating profile certs ...
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 08:04:01.911645    6576 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 08:04:01.912393    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:04:01.912708    6576 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:04:01.912818    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:04:01.913766    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:04:01.914884    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:04:01.946745    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:04:01.978670    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:04:02.020771    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:04:02.052789    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 08:04:02.083785    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:04:02.111686    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:04:02.138106    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:04:02.167957    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:04:02.197699    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:04:02.228974    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:04:02.258542    6576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:04:02.283541    6576 ssh_runner.go:195] Run: openssl version
	I1205 08:04:02.296537    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.312534    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:04:02.327543    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.334539    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.339544    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.392223    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:04:02.408977    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.424981    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:04:02.439981    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.446982    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.451985    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.500175    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:04:02.518368    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.537597    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:04:02.555653    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.562656    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.566659    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.617005    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:04:02.635329    6576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:04:02.649383    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 08:04:02.697863    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 08:04:02.747535    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 08:04:02.802236    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 08:04:02.853222    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 08:04:02.901642    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
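Each control-plane certificate is checked with openssl x509 -checkend 86400, i.e. "will this cert be expired 24 hours from now"; a failing check would trigger regeneration. The Go equivalent parses the PEM and compares NotAfter directly:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin mirrors `openssl x509 -checkend 86400`: it reports whether
    // the certificate will already be expired `window` from now.
    func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	soon, err := expiresWithin(pemBytes, 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon) // true would force regeneration
    }
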
	I1205 08:04:02.946962    6576 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:04:02.951256    6576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:04:02.986478    6576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:04:02.999955    6576 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 08:04:02.999955    6576 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 08:04:03.003999    6576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 08:04:03.019291    6576 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 08:04:03.022819    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.083372    6576 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.084185    6576 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042100" cluster setting kubeconfig missing "newest-cni-042100" context setting]
	I1205 08:04:03.084741    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.109144    6576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 08:04:03.128232    6576 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 08:04:03.138905    6576 kubeadm.go:602] duration metric: took 138.9481ms to restartPrimaryControlPlane
	I1205 08:04:03.138905    6576 kubeadm.go:403] duration metric: took 191.9404ms to StartCluster
	I1205 08:04:03.138905    6576 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.138905    6576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.141698    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.142419    6576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:04:03.142419    6576 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:04:03.142419    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:04:03.163290    6576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting dashboard=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon dashboard=true in "newest-cni-042100"
	W1205 08:04:03.163290    6576 addons.go:248] addon dashboard should already be in state true
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.192363    6576 out.go:179] * Verifying Kubernetes components...
	I1205 08:04:03.249622    6576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-042100"
	I1205 08:04:03.250609    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.257607    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.258609    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:03.261608    6576 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 08:04:03.264610    6576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:04:03.309607    6576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.309607    6576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:04:03.312609    6576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.312609    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:04:03.312609    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.315610    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.318607    6576 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 08:04:03.510751    7752 pod_ready.go:94] pod "coredns-66bc5c9577-gsfxl" is "Ready"
	I1205 08:04:03.510751    7752 pod_ready.go:86] duration metric: took 25.5102081s for pod "coredns-66bc5c9577-gsfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.517746    7752 pod_ready.go:83] waiting for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.529764    7752 pod_ready.go:94] pod "etcd-kubenet-218000" is "Ready"
	I1205 08:04:03.529764    7752 pod_ready.go:86] duration metric: took 12.0185ms for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.535749    7752 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.544756    7752 pod_ready.go:94] pod "kube-apiserver-kubenet-218000" is "Ready"
	I1205 08:04:03.544756    7752 pod_ready.go:86] duration metric: took 9.007ms for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.549745    7752 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.706418    7752 pod_ready.go:94] pod "kube-controller-manager-kubenet-218000" is "Ready"
	I1205 08:04:03.706418    7752 pod_ready.go:86] duration metric: took 156.6708ms for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.906896    7752 pod_ready.go:83] waiting for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.305526    7752 pod_ready.go:94] pod "kube-proxy-l9mnz" is "Ready"
	I1205 08:04:04.305526    7752 pod_ready.go:86] duration metric: took 398.0934ms for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.506453    7752 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:94] pod "kube-scheduler-kubenet-218000" is "Ready"
	I1205 08:04:04.908413    7752 pod_ready.go:86] duration metric: took 401.8894ms for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:40] duration metric: took 37.4190345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:04:05.004707    7752 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:04:05.007705    7752 out.go:179] * Done! kubectl is now configured to use "kubenet-218000" cluster and "default" namespace by default
	I1205 08:04:03.344609    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 08:04:03.344609    6576 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 08:04:03.353008    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.373762    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.389748    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.415749    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.454747    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:03.481745    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.544756    6576 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:04:03.550761    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:03.552751    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.556766    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 08:04:03.556766    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 08:04:03.561743    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.627813    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 08:04:03.627923    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 08:04:03.654463    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 08:04:03.654463    6576 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 08:04:03.731575    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 08:04:03.731654    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1205 08:04:03.751356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.751356    6576 retry.go:31] will retry after 148.467646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.754346    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	W1205 08:04:03.755354    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.755354    6576 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 08:04:03.755354    6576 retry.go:31] will retry after 202.130528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
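Each failed apply above is rescheduled by the retry helper visible at retry.go:31, with the wait growing from roughly 150ms toward several seconds across attempts. A minimal sketch of that retry-with-increasing-delay shape, assuming a simple linear backoff with jitter rather than minikube's exact policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to maxAttempts times, sleeping a little longer (with
// jitter) before each retry -- the same shape as the "will retry after
// 148ms / 202ms / 291ms ..." lines in the log.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}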
	I1205 08:04:03.774491    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 08:04:03.774491    6576 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 08:04:03.793803    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 08:04:03.793803    6576 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 08:04:03.828295    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 08:04:03.828351    6576 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 08:04:03.851355    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.851355    6576 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 08:04:03.876402    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.905217    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:03.957742    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.957742    6576 retry.go:31] will retry after 291.655688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.962256    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:03.992521    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.992521    6576 retry.go:31] will retry after 561.792628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.049441    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:04.057481    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.057556    6576 retry.go:31] will retry after 288.112081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.254701    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.343216    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.343216    6576 retry.go:31] will retry after 359.979776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.350062    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:04.431174    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.431174    6576 retry.go:31] will retry after 483.679942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.549772    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:04.559147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:04.642871    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.642871    6576 retry.go:31] will retry after 528.970083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.708123    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.787283    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.787283    6576 retry.go:31] will retry after 459.684582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.919229    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.004707    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.004707    6576 retry.go:31] will retry after 831.823948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.050298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.177969    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:05.252148    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:05.268807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.268914    6576 retry.go:31] will retry after 1.219301827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:05.381615    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.381684    6576 retry.go:31] will retry after 1.003502336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.548840    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.841493    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.945714    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.945714    6576 retry.go:31] will retry after 1.344373684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.051495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:06.390219    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:06.476859    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.476859    6576 retry.go:31] will retry after 916.677354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.493513    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:06.550586    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:06.586142    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.586142    6576 retry.go:31] will retry after 814.667109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.049968    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:07.295279    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:07.385161    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.385225    6576 retry.go:31] will retry after 2.309719888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.397737    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:07.404241    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.24760459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.229405263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.550637    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.050329    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.551330    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.052416    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.549628    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.699045    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:09.722067    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:09.740066    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:09.854063    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.854063    6576 retry.go:31] will retry after 1.718952919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.926061    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.926061    6576 retry.go:31] will retry after 2.401961347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.960056    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.961057    6576 retry.go:31] will retry after 3.751594778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:10.049061    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:10.375252    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:04:10.549298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.049797    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.550139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.577133    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:11.663155    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:11.663155    6576 retry.go:31] will retry after 4.120114825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.049572    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:12.333014    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:12.419653    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.419653    6576 retry.go:31] will retry after 2.740389125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.549673    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.050128    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.549901    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.717839    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:13.806807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:13.806807    6576 retry.go:31] will retry after 4.752661147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:14.050521    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:14.551720    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.050682    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.165926    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:15.256271    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.256271    6576 retry.go:31] will retry after 4.534312748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.549805    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.787818    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:15.865098    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.865628    6576 retry.go:31] will retry after 5.383695211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:16.050434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:16.549442    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.049923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.550083    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.049667    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.551343    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.565349    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:18.647263    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:18.647263    6576 retry.go:31] will retry after 8.382323881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.050424    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:19.104488    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 08:04:19.104793    4560 node_ready.go:38] duration metric: took 6m0.001013s for node "no-preload-104100" to be "Ready" ...
	I1205 08:04:19.107356    4560 out.go:203] 
	W1205 08:04:19.110511    4560 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 08:04:19.110554    4560 out.go:285] * 
	W1205 08:04:19.112383    4560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:04:19.116573    4560 out.go:203] 
	I1205 08:04:19.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.796280    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:19.904265    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.904265    6576 retry.go:31] will retry after 5.117792571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:20.052293    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:20.550380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.052677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.255736    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:21.356356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.356356    6576 retry.go:31] will retry after 8.875197166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.550333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.049310    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.550338    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.050244    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.551039    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.050874    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.550399    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:25.027043    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:25.050989    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:25.159593    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.159593    6576 retry.go:31] will retry after 7.802785807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.553440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.050359    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.551986    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:27.034606    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:27.050924    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:27.141503    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.141551    6576 retry.go:31] will retry after 13.674183061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.553694    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.049210    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.550842    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.051091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.549571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.051474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.237147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:30.345143    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.345143    6576 retry.go:31] will retry after 18.684554823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.552505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.050974    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.550315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.053025    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.550841    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.967139    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:33.050008    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:33.074001    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.074001    6576 retry.go:31] will retry after 21.457353412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.550375    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.053598    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.050034    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.050947    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.552933    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.049827    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.551205    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.050234    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.552156    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.050748    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.549737    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.050549    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.550949    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.819283    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:40.946292    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:40.946292    6576 retry.go:31] will retry after 18.180546633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:41.051295    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:41.551923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.051010    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.550802    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.050090    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.549595    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.050323    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.551060    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.050284    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.549318    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.049045    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.550390    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.050869    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.549920    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.050040    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:49.037573    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:49.050392    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:49.132808    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.132808    6576 retry.go:31] will retry after 12.282235903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.549952    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.052465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.550412    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.053026    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.551123    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.050959    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.550243    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.051085    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.550766    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.053585    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.537931    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:54.551106    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:54.662326    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:54.662326    6576 retry.go:31] will retry after 25.982171867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:55.050927    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:55.551197    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.049847    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.551717    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.050571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.552306    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.050495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.550960    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.050091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.133373    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:59.223117    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.223117    6576 retry.go:31] will retry after 23.551015037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.551231    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.047738    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.550465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.051875    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.420389    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:01.505728    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.505728    6576 retry.go:31] will retry after 17.206812229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.551821    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.051028    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.550994    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.051369    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.550326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:03.585938    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.585938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:03.590134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:03.617879    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.617879    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:03.624332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:03.651940    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.651940    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:03.656120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:03.685733    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.685733    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:03.690030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:03.719658    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.719713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:03.723576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:03.755797    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.755797    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:03.760966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:03.789461    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.789461    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:03.793178    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:03.823147    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.823147    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:03.823147    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:03.823679    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:03.890829    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:03.890829    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:03.937573    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:03.937573    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:04.028268    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:04.028268    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:04.028268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:04.054265    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:04.054265    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
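
[Editor's note] At this point the wait loop has given up on pgrep and switches to diagnostics: logs.go enumerates the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) via docker ps name filters and finds none, then gathers kubelet, dmesg, Docker, and container-status logs; "describe nodes" fails with the same connection-refused, consistent with nothing listening on 8443 at all. A hypothetical probe (not part of minikube) that distinguishes "no listener" from "listening but unhealthy":

    // probe.go: check whether anything accepts TCP on the apiserver
    // port, mirroring the connection-refused failures above. Sketch only.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connect: connection refused" means no listener at all,
    		// consistent with "0 containers" for k8s_kube-apiserver above.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open (may still be unhealthy)")
    }
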
	I1205 08:05:06.624597    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:06.650113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:06.681568    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.682088    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:06.685527    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:06.715181    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.715181    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:06.718768    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:06.748649    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.748692    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:06.752313    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:06.783519    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.783582    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:06.787257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:06.817858    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.817858    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:06.821703    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:06.854241    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.854241    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:06.857773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:06.888901    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.888901    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:06.894071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:06.923675    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.923675    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
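The scan above is the loop the log collector repeats throughout this failure: first a pgrep for a kube-apiserver process, then one docker name-filter query per control-plane component, each returning "0 containers". A sketch of the same query (assumes a local docker CLI; the component name prefixes are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println("docker ps failed:", err)
                return
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids) // 0 here
        }
    }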
	I1205 08:05:06.923675    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:06.923675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:06.974113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:06.974166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:07.037689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:07.037689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:07.080588    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:07.080588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:07.171034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:07.171067    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:07.171067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:09.706054    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:09.732108    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:09.767273    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.767300    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:09.770837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:09.802479    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.802550    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:09.806320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:09.835537    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.835537    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:09.841566    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:09.874578    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.874578    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:09.878148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:09.906942    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.907017    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:09.910154    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:09.941197    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.941197    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:09.945133    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:09.974591    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.974591    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:09.978698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:10.007749    6576 logs.go:282] 0 containers: []
	W1205 08:05:10.007749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:10.007749    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:10.007749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:10.044236    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:10.044236    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:10.130995    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:10.130995    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:10.130995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:10.158359    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:10.158945    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:10.209053    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:10.209053    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:12.782787    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:12.809043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:12.839958    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.839958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:12.845180    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:12.876657    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.876720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:12.880739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:12.908227    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.908227    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:12.912011    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:12.942400    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.942449    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:12.945431    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:12.973155    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.973155    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:12.976739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:13.004259    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.004259    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:13.008151    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:13.038225    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.038225    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:13.041692    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:13.070500    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.070500    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:13.070500    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:13.070500    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:13.134608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:13.134608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:13.173994    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:13.173994    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:13.270602    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:13.270665    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:13.270665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:13.299297    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:13.299297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:15.870600    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:15.895506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:15.927013    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.927013    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:15.930717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:15.959875    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.959941    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:15.963955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:15.992862    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.992862    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:15.996303    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:16.023966    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.023966    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:16.027786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:16.058698    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.058698    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:16.065246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:16.094826    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.094826    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:16.098650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:16.144774    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.144820    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:16.148422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:16.177296    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.177296    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:16.177296    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:16.177296    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:16.242225    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:16.242225    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:16.283778    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:16.283778    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:16.378623    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:16.378623    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:16.378623    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:16.408736    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:16.409256    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:18.719251    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:18.815541    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:18.815541    6576 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
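The storage-provisioner failure above is a client-side validation error, not a rejected apply: kubectl cannot download the OpenAPI schema from the dead endpoint, so the request never reaches a server at all, and the suggested --validate=false would only move the same connection-refused failure to the apply itself. A sketch of what the ssh_runner line is effectively executing inside the node (paths copied from the log; assumes kubectl on PATH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "apply", "--force",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // with the apiserver down this prints the same
            // "failed to download openapi ... connection refused" stderr as above
            fmt.Printf("apply failed: %v\n%s", err, out)
        }
    }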
	I1205 08:05:18.959261    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:18.983847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:19.016048    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.016048    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:19.022913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:19.054693    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.054752    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:19.058555    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:19.087342    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.087342    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:19.090772    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:19.118199    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.118199    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:19.121567    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:19.151346    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.151346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:19.155305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:19.186521    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.186611    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:19.190219    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:19.220730    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.220730    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:19.225064    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:19.255890    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.256013    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:19.256013    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:19.256013    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:19.324476    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:19.324476    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:19.362802    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:19.362802    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:19.443537    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:19.444546    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:19.444546    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:19.474585    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:19.474647    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:20.651307    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:20.735190    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:20.735294    6576 retry.go:31] will retry after 27.405422909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.034778    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:22.060808    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:22.093037    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.093111    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:22.097193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:22.124988    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.125036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:22.128496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:22.157896    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.157947    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:22.161826    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:22.190808    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.190839    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:22.194900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:22.227226    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.227346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:22.230966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:22.260811    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.260861    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:22.264784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:22.295222    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.295331    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:22.302135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:22.343045    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.343116    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:22.343116    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:22.343116    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:22.394026    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:22.394026    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:22.457078    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:22.457078    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:22.498385    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:22.498434    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:22.581112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:22.581112    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:22.581112    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:22.780060    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:05:22.859804    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.859804    6576 retry.go:31] will retry after 21.036491608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:25.113006    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:25.148820    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:25.186604    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.186604    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:25.191401    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:25.223786    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.223867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:25.227359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:25.262253    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.262310    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:25.266030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:25.298397    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.298433    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:25.303771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:25.334112    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.334112    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:25.338565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:25.370125    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.370206    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:25.374513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:25.406130    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.406219    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:25.410417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:25.442663    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.442742    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:25.442742    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:25.442742    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:25.479786    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:25.479786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:25.573308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:25.573308    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:25.573308    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:25.599667    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:25.599667    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:25.650617    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:25.650617    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.218354    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:28.243705    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:28.279022    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.279022    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:28.283525    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:28.313798    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.313798    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:28.318172    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:28.347700    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.347700    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:28.351701    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:28.381257    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.381341    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:28.384917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:28.416041    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.416041    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:28.419541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:28.447349    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.447349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:28.451684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:28.479275    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.479307    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:28.483095    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:28.511115    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.511187    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:28.511187    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:28.511237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.574706    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:28.574706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:28.615541    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:28.615541    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:28.709604    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
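Every kubectl invocation in the stderr block above fails the same way: nothing is listening on localhost:8443, so the client never reaches the API. A minimal, hypothetical Go probe (not part of minikube) reproduces that symptom directly:

    // Hypothetical probe, not from the report: while kube-apiserver is down,
    // dialing localhost:8443 fails with the same "connect: connection refused"
    // text seen in the kubectl stderr above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }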
	I1205 08:05:28.709604    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:28.709604    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:28.738815    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:28.738815    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
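The cycle that just ended shows the same polling pattern for each control-plane component: list matching containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and report how many IDs came back ("0 containers: []"). A sketch of that loop in Go, using only the flags visible in the log; the helper itself is illustrative, not minikube's code:

    // Illustrative sketch of the per-component container poll in the log.
    // Requires the docker CLI on PATH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists container IDs whose name matches k8s_<name>,
    // mirroring the exact docker flags seen in the log lines.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "kubernetes-dashboard"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }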
	I1205 08:05:31.300476    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:31.328202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:31.357921    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.357958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:31.361905    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:31.390844    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.390926    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:31.395488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:31.426488    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.426570    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:31.430048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:31.461632    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.461687    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:31.465105    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:31.492594    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.492657    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:31.496042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:31.523806    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.523834    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:31.527758    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:31.557959    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.558020    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:31.561776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:31.588451    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.588485    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:31.588513    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:31.588535    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:31.675984    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:31.675984    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:31.675984    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:31.706483    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:31.706567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.753154    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:31.753677    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:31.813379    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:31.813379    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.359731    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:34.386737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:34.416273    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.416306    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:34.419220    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:34.452145    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.452661    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:34.456139    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:34.486541    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.486593    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:34.489738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:34.520642    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.520642    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:34.524007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:34.556848    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.556848    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:34.560551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:34.589976    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.589976    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:34.594061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:34.623871    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.623871    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:34.627661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:34.655428    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.655428    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:34.655428    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:34.655428    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.693248    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:34.693248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:34.782095    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:34.782095    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:34.782095    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:34.809243    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:34.809243    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:34.859486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:34.859486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.427533    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:37.454695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:37.485702    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.485702    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:37.489329    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:37.522074    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.522074    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:37.525283    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:37.555534    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.555534    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:37.559473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:37.589923    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.589923    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:37.593340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:37.625230    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.625230    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:37.628764    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:37.658722    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.658722    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:37.661870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:37.693003    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.693003    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:37.696992    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:37.726216    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.726286    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:37.726286    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:37.726333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.791305    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:37.791305    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:37.829600    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:37.829600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:37.920892    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:37.920892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:37.920892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:37.947989    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:37.947989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
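The "container status" command in the log relies on a shell fallback: prefer crictl when `which crictl` finds it, otherwise run docker ps -a. An illustrative Go equivalent of that fallback (an assumption, not minikube's implementation):

    // Hypothetical Go equivalent of the shell fallback
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    // seen in the log: try crictl first, fall back to docker when crictl
    // is missing or errors out.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() ([]byte, error) {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
    			return out, nil
    		}
    	}
    	// Mirrors the `|| sudo docker ps -a` branch of the shell command.
    	return exec.Command("docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("no runtime answered:", err)
    		return
    	}
    	fmt.Print(string(out))
    }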
	I1205 08:05:40.501988    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:40.527784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:40.563590    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.563590    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:40.567375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:40.598332    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.598332    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:40.602019    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:40.629289    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.629289    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:40.633378    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:40.660574    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.660630    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:40.664275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:40.691063    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.691063    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:40.694694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:40.723611    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.723667    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:40.726975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:40.755155    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.755155    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:40.759134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:40.793723    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.793723    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:40.793723    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:40.793723    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:40.831198    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:40.831198    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:40.925587    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:40.925587    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:40.925587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:40.954081    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:40.954114    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:41.007048    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:41.007096    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:43.582160    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:43.607539    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:43.638277    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.638277    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:43.642375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:43.675099    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.675099    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:43.678089    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:43.706803    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.706803    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:43.713114    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:43.740522    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.740522    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:43.744411    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:43.773724    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.773780    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:43.777763    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:43.803962    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.803962    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:43.807698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:43.839559    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.839559    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:43.843918    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:43.876174    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.876252    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:43.876252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:43.876252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:43.902671    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:05:43.934973    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:43.934973    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 08:05:43.999146    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:43.999146    6576 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
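addons.go logs "apply failed, will retry" and only surfaces the error through out.go once retries are exhausted. A hedged sketch of that retry shape, shelling out the same way the log does; the attempt count and backoff are assumptions, not minikube's actual policy:

    // Hypothetical sketch of the "apply failed, will retry" loop implied by
    // addons.go:477 above. Manifest path and KUBECONFIG value are copied from
    // the log; the retry policy is assumed.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    		out, err := cmd.CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
    		time.Sleep(2 * time.Second) // assumed backoff
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
    		fmt.Println(err)
    	}
    }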
	I1205 08:05:44.032735    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:44.033740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:44.075384    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:44.075384    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:44.157223    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:44.157223    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:44.157223    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:46.691333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:46.717072    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:46.748595    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.748595    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:46.752218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:46.780374    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.780374    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:46.783922    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:46.815066    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.815066    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:46.818942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:46.847510    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.847563    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:46.851012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:46.883362    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.883465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:46.886941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:46.916379    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.916451    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:46.920641    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:46.949114    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.949114    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:46.953549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:46.983164    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.983164    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:46.983164    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:46.983164    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:47.022255    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:47.022255    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:47.111784    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:47.111860    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:47.111860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:47.138559    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:47.138559    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:47.188823    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:47.189346    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:48.147422    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:48.239875    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:48.239875    6576 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:48.242898    6576 out.go:179] * Enabled addons: 
	I1205 08:05:48.245836    6576 addons.go:530] duration metric: took 1m45.1017438s for enable addons: enabled=[]
	I1205 08:05:49.757493    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:49.785573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:49.818757    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.818757    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:49.822359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:49.849919    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.849919    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:49.853892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:49.881451    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.881451    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:49.884508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:49.916549    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.916599    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:49.922025    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:49.955857    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.955857    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:49.959871    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:49.992747    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.992747    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:49.997745    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:50.027985    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.027985    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:50.032696    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:50.066315    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.066315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:50.066315    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:50.066315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:50.162764    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:50.162764    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:50.162764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:50.190807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:50.190807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:50.244357    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:50.244357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:50.306832    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:50.306832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:52.850828    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:52.881404    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:52.914164    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.914164    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:52.919056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:52.946339    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.946339    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:52.950249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:52.977159    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.977159    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:52.981587    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:53.011126    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.011126    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:53.016170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:53.050900    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.050900    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:53.055929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:53.086492    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.086492    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:53.091422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:53.123587    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.123587    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:53.126586    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:53.155525    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.155525    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:53.155525    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:53.155525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:53.220198    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:53.221197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:53.261683    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:53.261683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:53.355432    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:53.355432    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:53.355432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:53.386521    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:53.386521    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
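Each polling cycle above runs the same probe battery: pgrep for a kube-apiserver process, one "docker ps -a --filter=name=k8s_<component>" query per control-plane component (every query returns zero containers), and log gathering for kubelet, dmesg, describe nodes, Docker, and container status. The describe-nodes step fails because nothing answers on localhost:8443. A minimal sketch of the same probes, assuming shell access to the node (for example via "minikube ssh"); the component names and commands are taken verbatim from the cycle above:

    # check for each expected control-plane container by name prefix
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'
    done
    # gather the same logs minikube collects while waiting
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig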
	I1205 08:05:55.947613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:55.973795    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:56.007916    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.007916    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:56.011792    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:56.045094    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.045094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:56.048513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:56.082501    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.082501    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:56.086603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:56.116918    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.117005    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:56.120916    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:56.150716    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.150716    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:56.154101    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:56.186882    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.186882    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:56.190500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:56.223741    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.223741    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:56.227290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:56.255902    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.255902    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:56.255902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:56.255902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:56.285180    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:56.285180    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:56.333650    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:56.333650    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:56.393332    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:56.393332    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:56.432841    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:56.432841    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:56.521419    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
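The container-status probe in each cycle is a shell fallback chain: the backtick substitution resolves to crictl when it is on PATH (otherwise to the literal word crictl, which then fails to execute), and the outer || falls through to plain "docker ps -a". The idiom in isolation, plus a spelled-out near-equivalent (the original also falls back when crictl is present but exits non-zero):

    # one-liner exactly as logged above
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

    # roughly equivalent, more explicit form
    if command -v crictl >/dev/null 2>&1 && sudo crictl ps -a; then
      :    # crictl worked; nothing more to do
    else
      sudo docker ps -a
    fi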
	I1205 08:05:59.025923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:59.056473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:59.091893    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.091909    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:59.095650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:59.128079    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.128185    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:59.131611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:59.159655    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.159655    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:59.163348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:59.192422    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.192422    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:59.196339    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:59.226737    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.226737    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:59.230776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:59.258194    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.258194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:59.261784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:59.292592    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.292592    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:59.296370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:59.323764    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.323764    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:59.323764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:59.323764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:59.375689    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:59.376207    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:59.440586    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:59.440586    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:59.479856    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:59.479856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:59.578161    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:59.578161    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:59.578161    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.111153    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:02.137611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:02.172231    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.172231    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:02.176271    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:02.208274    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.208274    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:02.211990    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:02.244184    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.244245    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:02.247661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:02.278388    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.278388    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:02.282228    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:02.312290    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.312290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:02.316470    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:02.345487    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.345487    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:02.349444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:02.378305    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.378305    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:02.381923    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:02.409737    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.409737    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:02.409737    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:02.409737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:02.477029    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:02.477029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:02.517422    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:02.517422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:02.605249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:02.605249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:02.605249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.632767    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:02.632828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:05.196182    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:05.221488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:05.251281    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.251355    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:05.254854    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:05.284103    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.284103    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:05.288076    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:05.315552    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.315552    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:05.319409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:05.347664    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.347664    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:05.351387    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:05.382685    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.382685    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:05.386801    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:05.416816    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.416816    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:05.421471    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:05.451265    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.451350    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:05.455129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:05.486455    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.486455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:05.486455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:05.486455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:05.548252    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:05.548252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:05.586103    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:05.586103    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:05.689902    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:05.689902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:05.689902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:05.715463    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:05.715463    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:08.298546    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:08.325694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:08.358357    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.358427    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:08.362535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:08.393631    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.393631    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:08.397365    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:08.429162    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.429162    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:08.433444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:08.464672    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.464672    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:08.467810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:08.496450    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.496450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:08.499640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:08.526246    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.526246    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:08.530507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:08.558130    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.558130    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:08.561856    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:08.590753    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.590753    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:08.590753    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:08.590753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:08.656049    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:08.656049    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:08.697268    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:08.697268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:08.794510    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:08.794510    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:08.794510    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:08.839662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:08.839734    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:11.394677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:11.423727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:11.453346    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.453346    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:11.460955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:11.498834    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.498834    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:11.498834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:11.532657    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.532657    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:11.540987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:11.575759    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.575786    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:11.579561    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:11.612047    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.612102    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:11.615579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:11.644318    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.644370    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:11.648326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:11.678026    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.678026    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:11.681899    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:11.711631    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.711631    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:11.711631    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:11.711631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:11.772905    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:11.772905    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:11.814639    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:11.814639    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:11.905607    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:11.905657    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:11.905700    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:11.934717    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:11.935238    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:14.488836    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:14.512857    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:14.546571    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.546571    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:14.549903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:14.580887    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.580887    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:14.584967    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:14.630312    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.630312    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:14.633809    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:14.667373    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.667373    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:14.671026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:14.699813    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.699813    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:14.703177    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:14.734619    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.734619    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:14.739056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:14.769129    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.769129    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:14.773030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:14.803689    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.803689    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:14.803689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:14.803689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:14.841923    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:14.841923    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:14.932570    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:14.932570    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:14.932570    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:14.961067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:14.961591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:15.010912    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:15.010953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:17.575458    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:17.603741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:17.636367    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.636367    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:17.640529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:17.668380    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.668380    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:17.672111    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:17.700544    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.700544    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:17.704634    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:17.736823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.736823    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:17.741002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:17.770125    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.770125    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:17.775816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:17.812823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.812823    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:17.815683    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:17.844895    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.844895    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:17.849115    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:17.880706    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.880706    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:17.880706    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:17.880706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:17.969171    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:17.969171    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:17.969263    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:17.995396    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:17.995396    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:18.044466    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:18.044466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:18.105721    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:18.105721    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:20.651671    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:20.679273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:20.707727    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.707727    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:20.711373    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:20.741891    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.741891    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:20.746073    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:20.777260    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.777260    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:20.780520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:20.816982    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.816982    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:20.820520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:20.850461    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.850461    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:20.854205    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:20.882429    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.882429    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:20.886920    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:20.914179    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.914179    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:20.917831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:20.949708    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.949708    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:20.949708    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:20.949708    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:21.013967    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:21.013967    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:21.053946    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:21.053946    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:21.140482    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:21.141002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:21.141002    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:21.170239    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:21.170239    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
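
[editor's note] The cycle above shows minikube probing for each control-plane container by name filter; every `docker ps -a --filter=name=k8s_<component>` call returns an empty ID list, which is why each probe is followed by a W-level "No container was found matching" line. As a minimal sketch, the same probe can be reproduced by hand inside the node — the command shape is taken verbatim from the log; only the loop that parameterizes the component names is added here for illustration:

    # Reproduce the per-component container probe seen in the log.
    # The name filters match kubeadm-style container names (k8s_<component>_...).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format={{.ID}})
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""   # corresponds to the W-level lines above
      else
        echo "$c: $ids"
      fi
    done
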
	I1205 08:06:23.729627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:23.758686    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:23.791537    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.791594    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:23.796131    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:23.827894    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.827894    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:23.832419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:23.862718    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.862718    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:23.867837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:23.896272    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.896272    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:23.900193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:23.929016    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.929078    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:23.932778    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:23.962372    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.962447    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:23.966147    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:23.998472    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.998472    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:24.004351    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:24.033564    6576 logs.go:282] 0 containers: []
	W1205 08:06:24.033564    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:24.033564    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:24.033564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:24.099505    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:24.099505    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:24.139900    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:24.139900    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:24.233474    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:24.233474    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:24.233474    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:24.263408    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:24.263408    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:26.816321    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:26.841457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:26.872936    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.872992    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:26.876345    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:26.908512    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.908580    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:26.912736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:26.944068    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.944068    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:26.947603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:26.975323    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.975360    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:26.978941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:27.008708    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.008751    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:27.012371    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:27.044160    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.044225    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:27.047780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:27.078172    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.078172    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:27.081803    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:27.111287    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.111370    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:27.111370    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:27.111435    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:27.161265    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:27.161329    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:27.221473    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:27.221473    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:27.263907    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:27.263907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:27.357876    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:27.357876    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:27.357876    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
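
[editor's note] Each "Gathering logs" pass issues the same collectors over SSH. For reference, these are the exact commands from the lines above; running them directly on the node (e.g., via `minikube ssh` — an assumption about how one would reproduce this outside the test harness) yields the same material:

    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig                             # fails while the apiserver is down
    sudo journalctl -u docker -u cri-docker -n 400                             # Docker / cri-docker logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status
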
	I1205 08:06:29.890252    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:29.916690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:29.946274    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.946274    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:29.950679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:29.979149    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.979149    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:29.982229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:30.010085    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.010085    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:30.014016    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:30.043254    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.043254    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:30.048048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:30.080613    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.080613    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:30.084300    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:30.114627    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.114627    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:30.118584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:30.147947    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.148009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:30.151166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:30.180743    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.180828    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:30.180828    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:30.180828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:30.244646    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:30.244646    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:30.286079    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:30.286079    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:30.376557    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:30.376557    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:30.376557    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:30.405737    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:30.405737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:32.958550    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:32.987728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:33.018308    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.018370    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:33.022062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:33.052435    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.052435    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:33.056434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:33.085355    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.085426    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:33.089343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:33.121676    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.121737    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:33.125504    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:33.157765    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.157765    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:33.161892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:33.191061    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.191061    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:33.194930    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:33.223173    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.223173    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:33.226650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:33.257481    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.257481    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:33.257481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:33.257481    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:33.301467    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:33.301467    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:33.389528    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:33.389528    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:33.389528    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:33.418631    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:33.418631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:33.465106    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:33.465185    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.034296    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:36.063459    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:36.095210    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.095210    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:36.098565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:36.127708    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.127786    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:36.131615    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:36.159964    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.159964    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:36.163771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:36.192604    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.192604    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:36.196679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:36.224877    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.224958    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:36.228553    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:36.258280    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.258280    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:36.261911    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:36.294140    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.294140    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:36.298273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:36.329657    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.329657    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:36.329657    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:36.329657    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:36.387784    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:36.387784    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.452385    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:36.452385    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:36.493394    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:36.493394    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:36.591485    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:36.591485    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:36.591567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.124474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:39.152578    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:39.183392    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.183392    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:39.187028    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:39.216193    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.216193    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:39.219743    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:39.251680    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.251759    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:39.255869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:39.283843    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.283843    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:39.287237    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:39.316021    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.316021    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:39.319015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:39.349194    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.349194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:39.352951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:39.403729    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.403729    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:39.411012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:39.442909    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.442909    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:39.442909    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:39.442909    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:39.509174    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:39.509174    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:39.550483    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:39.550483    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:39.650354    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:39.650354    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:39.650354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.676786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:39.676786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.228069    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:42.258786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:42.290791    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.290791    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:42.294739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:42.326094    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.326094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:42.329725    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:42.356052    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.356052    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:42.359752    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:42.390464    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.390464    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:42.393935    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:42.421882    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.421882    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:42.426609    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:42.457036    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.457036    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:42.460988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:42.486064    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.486064    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:42.491250    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:42.521748    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.521748    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:42.521748    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:42.521748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:42.551195    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:42.552197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.613626    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:42.613683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:42.678856    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:42.679856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:42.719297    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:42.719297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:42.811034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
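
[editor's note] Every `describe nodes` attempt fails identically: the client cannot reach the apiserver at localhost:8443 ("dial tcp [::1]:8443: connect: connection refused"), which is consistent with the empty `k8s_kube-apiserver` probe above — there is no apiserver container listening to accept the connection. A quick hedged check from inside the node (hypothetical reproduction steps, not part of the test run; assumes `ss` and `curl` are available in the node image):

    # No apiserver process and nothing listening on 8443 would explain
    # the "connection refused" loop above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    sudo ss -ltn 'sport = :8443' || true          # expect: no listener on 8443
    curl -k https://localhost:8443/livez || true  # expect: connection refused
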
	I1205 08:06:45.316640    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:45.343574    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:45.372899    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.372899    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:45.376229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:45.408264    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.408264    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:45.412119    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:45.440697    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.440697    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:45.444501    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:45.471692    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.471727    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:45.475496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:45.508400    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.508450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:45.512541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:45.544177    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.544233    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:45.548858    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:45.579165    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.579165    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:45.582164    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:45.623052    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.623052    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:45.623052    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:45.623052    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:45.651554    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:45.651554    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:45.701716    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:45.701768    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:45.766248    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:45.766248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:45.806341    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:45.806341    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:45.895675    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
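
[editor's note] The retries land roughly every three seconds (pgrep at 08:06:42.2, 45.3, 48.4, 51.5), i.e., the harness polls for an apiserver process on a short fixed interval rather than backing off. A minimal sketch of an equivalent wait loop — the ~3s interval is read off the timestamps above, while the 5-minute budget is purely an illustrative assumption:

    # Poll for the apiserver process every ~3s, as the timestamps suggest,
    # giving up after a hypothetical 5-minute budget.
    deadline=$(( $(date +%s) + 300 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; exit 1; }
      sleep 3
    done
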
	I1205 08:06:48.401571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:48.432481    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:48.466418    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.466418    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:48.471424    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:48.503617    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.503617    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:48.507677    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:48.541480    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.541480    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:48.547529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:48.579177    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.579177    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:48.585087    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:48.626465    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.626465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:48.630533    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:48.660304    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.660304    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:48.663999    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:48.694957    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.694957    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:48.699665    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:48.725908    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.725908    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:48.725908    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:48.725908    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:48.817395    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:48.817466    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:48.817466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:48.848226    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:48.848739    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:48.900060    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:48.900060    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:48.962797    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:48.962797    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.508647    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:51.536278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:51.573226    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.573323    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:51.578061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:51.614603    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.614603    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:51.619576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:51.647095    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.647095    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:51.652535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:51.680320    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.680369    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:51.684269    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:51.717798    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.717827    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:51.721877    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:51.750482    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.750482    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:51.754602    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:51.786216    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.786216    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:51.790834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:51.819030    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.819030    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
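Between journal pulls, each expected control-plane component is looked up by container name prefix (k8s_<component>); since the API server never came up, every query returns an empty list, hence the paired `0 containers: []` / `No container was found matching ...` lines. A sketch of that enumeration (the helper name containerIDs is ours, not minikube's):

```go
// list_k8s_containers.go - a sketch of the per-component lookup the log
// performs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids := containerIDs(c)
		// An empty result here corresponds to the `0 containers: []` lines above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```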
	I1205 08:06:51.819030    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:51.819030    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:51.876069    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:51.876110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:51.938469    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:51.938469    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.980953    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:51.980953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:52.079938    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:52.079938    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:52.079938    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
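The cycle then repeats: the timestamps (08:06:51, 08:06:54, 08:06:57, ...) show a probe roughly every three seconds, each opening with `sudo pgrep -xnf kube-apiserver.*minikube.*` (-x exact match, -f match against the full command line, -n newest match). A sketch of that wait loop, with the cadence assumed from the log rather than taken from minikube's sources:

```go
// wait_apiserver.go - an illustrative sketch of the polling loop visible in
// the log above; not minikube's actual control flow.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp reports whether a kube-apiserver process exists; pgrep exits
// non-zero when nothing matches, which is the state throughout this log.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// The report's probes recur about every three seconds, with the
		// container enumeration and log gathering in between.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```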
	I1205 08:06:54.616891    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:54.642146    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:54.675691    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.675691    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:54.679440    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:54.709522    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.709522    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:54.713343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:54.744053    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.744112    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:54.748148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:54.782163    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.782232    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:54.786128    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:54.817067    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.817067    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:54.820867    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:54.850003    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.850003    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:54.854439    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:54.882517    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.882566    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:54.886475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:54.917057    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.917057    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:54.917057    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:54.917057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:54.982333    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:54.982333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:55.023534    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:55.023534    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:55.136747    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
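The `memcache.go:265` lines come from kubectl's cached discovery layer, which must fetch the server's API group list before it can resolve any resource; when that first request cannot connect, every kubectl invocation (including the `describe nodes` above) fails the same way. The equivalent call through client-go, as a sketch (requires the k8s.io/client-go module; the kubeconfig path is the one the report uses on the node):

```go
// discovery_probe.go - a sketch of the API group list request whose failure
// kubectl's cached discovery (memcache.go) reports in the log above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerGroups is the discovery request that fails with "couldn't get
	// current server API group list" when nothing listens on localhost:8443.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	fmt.Println("API groups available:", len(groups.Groups))
}
```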
	I1205 08:06:55.136823    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:55.136823    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:55.169237    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:55.169237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
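The "container status" step prefers crictl and degrades to plain `docker ps -a` when crictl is missing or cannot list anything; the backquoted `which crictl || echo crictl` keeps the command line valid either way. A sketch of the same fallback (behavior inferred from the command itself, not verified against minikube's sources):

```go
// container_status.go - a sketch of the crictl-or-docker fallback used for
// the "container status" gathering in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `which crictl || echo crictl` substitutes the full crictl path when it
	// is installed; otherwise the bare name makes the first listing fail and
	// the `|| sudo docker ps -a` fallback runs instead.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
```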
	I1205 08:06:57.723958    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:57.750382    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:57.784932    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.784932    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:57.788837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:57.815350    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.815350    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:57.819773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:57.850513    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.850513    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:57.854585    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:57.885405    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.885405    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:57.889340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:57.917143    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.917143    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:57.921061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:57.947843    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.947843    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:57.951577    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:57.983169    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.983169    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:57.986925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:58.016381    6576 logs.go:282] 0 containers: []
	W1205 08:06:58.016381    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:58.016381    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:58.016381    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:58.081766    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:58.081766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:58.122021    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:58.122021    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:58.216654    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:58.216654    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:58.216654    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:58.245369    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:58.245369    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:00.814255    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:00.841335    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:00.870336    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.870336    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:00.874294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:00.905321    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.905321    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:00.908814    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:00.940896    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.940896    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:00.944651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:00.975783    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.975855    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:00.979485    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:01.007166    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.007166    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:01.011052    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:01.038708    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.038708    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:01.043766    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:01.072944    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.072944    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:01.076562    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:01.104574    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.104623    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:01.104665    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:01.104665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:01.169748    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:01.169748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:01.210259    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:01.210259    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:01.310310    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:01.310310    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:01.310310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:01.336589    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:01.336589    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:03.889510    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:03.919078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:03.953291    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.953291    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:03.956276    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:03.986975    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.986975    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:03.991157    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:04.022935    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.022935    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:04.026117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:04.058273    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.058312    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:04.061868    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:04.093136    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.093136    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:04.096666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:04.122322    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.122349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:04.126167    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:04.158513    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.158545    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:04.161969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:04.190492    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.190569    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:04.190569    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:04.190569    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:04.259062    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:04.259062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:04.299558    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:04.299558    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:04.393556    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:04.393644    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:04.393644    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:04.420122    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:04.420122    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:06.976110    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:07.001980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:07.033975    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.033975    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:07.040090    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:07.069823    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.069823    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:07.074015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:07.103072    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.103072    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:07.107448    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:07.138770    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.138770    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:07.142987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:07.174660    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.174660    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:07.178913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:07.209719    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.209719    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:07.215472    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:07.243539    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.243539    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:07.248737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:07.279448    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.279448    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:07.279448    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:07.279448    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:07.345481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:07.346489    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:07.384275    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:07.384275    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:07.479588    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:07.479588    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:07.479588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:07.506786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:07.506786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:10.078099    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:10.103951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:10.139034    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.139034    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:10.142691    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:10.174629    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.174629    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:10.178323    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:10.206817    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.206817    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:10.210968    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:10.239729    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.239820    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:10.245043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:10.277712    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.277712    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:10.283741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:10.315362    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.315362    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:10.318268    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:10.346693    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.346693    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:10.350670    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:10.379081    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.379081    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:10.379081    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:10.379081    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:10.443299    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:10.443299    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:10.482497    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:10.482497    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:10.567024    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:10.567024    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:10.567024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:10.596635    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:10.596635    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:13.157670    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:13.186965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:13.222698    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.222730    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:13.226690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:13.261914    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.261957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:13.265780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:13.294590    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.294590    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:13.299066    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:13.329216    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.329216    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:13.334474    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:13.366263    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.366290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:13.369870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:13.398379    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.398379    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:13.402396    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:13.430465    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.430465    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:13.434253    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:13.462873    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.462905    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:13.462905    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:13.462949    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:13.525954    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:13.526955    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:13.566284    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:13.567284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:13.656971    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:13.656971    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:13.656971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:13.684284    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:13.684284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.241440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:16.268513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:16.302653    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.302653    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:16.306429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:16.337387    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.337387    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:16.342004    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:16.371449    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.371449    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:16.376376    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:16.406912    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.406912    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:16.410777    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:16.438875    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.438875    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:16.442983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:16.470299    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.470299    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:16.474336    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:16.504067    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.504067    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:16.508174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:16.536869    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.536869    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:16.536869    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:16.536869    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:16.624673    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:16.624703    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:16.624755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:16.653894    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:16.653894    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.701985    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:16.701985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:16.763148    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:16.763148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.307232    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:19.334513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:19.371034    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.371140    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:19.375038    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:19.403110    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.403186    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:19.407168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:19.435904    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.435904    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:19.440294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:19.470700    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.470700    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:19.474611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:19.502846    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.502915    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:19.506400    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:19.540483    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.540483    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:19.544695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:19.576470    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.576501    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:19.579834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:19.609587    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.609587    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:19.609587    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:19.609587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.653000    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:19.653000    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:19.747787    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:19.747787    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:19.747787    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:19.774804    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:19.774804    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:19.825222    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:19.825338    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.394074    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:22.419163    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:22.454202    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.454202    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:22.457716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:22.487462    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.487615    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:22.491427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:22.522398    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.522398    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:22.526148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:22.554536    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.554536    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:22.558447    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:22.590329    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.590401    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:22.595088    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:22.626553    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.626553    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:22.630372    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:22.658911    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.658911    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:22.662715    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:22.692369    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.692444    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:22.692468    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:22.692468    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.759391    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:22.759391    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:22.801415    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:22.801415    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:22.891643    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:22.881338   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.883456   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.887030   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.888265   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.889355   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:22.891710    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:22.891738    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:22.922662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:22.922662    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:25.480645    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:25.506403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:25.536534    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.536600    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:25.540233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:25.568373    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.568373    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:25.572581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:25.604196    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.604196    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:25.608476    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:25.639923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.640007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:25.643813    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:25.673923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.673923    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:25.677542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:25.709156    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.709156    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:25.712910    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:25.744371    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.744371    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:25.750463    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:25.778113    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.778113    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:25.778113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:25.778113    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:25.842953    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:25.842953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:25.881310    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:25.881310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:25.976920    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:25.964944   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.966342   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.968369   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.969905   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.970655   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:25.976920    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:25.976920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:26.005828    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:26.005889    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:28.568522    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:28.594981    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:28.628025    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.628025    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:28.631569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:28.661047    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.661047    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:28.664662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:28.692667    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.692667    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:28.696624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:28.725878    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.725944    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:28.730056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:28.758073    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.758129    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:28.761794    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:28.788812    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.788812    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:28.793030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:28.839778    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.839778    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:28.843937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:28.873288    6576 logs.go:282] 0 containers: []
	W1205 08:07:28.873288    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:28.873288    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:28.873288    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:28.937414    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:28.937414    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:28.975610    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:28.975610    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:29.110286    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:29.068093   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.099868   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.101288   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.103705   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:29.105454   11203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:29.110286    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:29.110286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:29.140120    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:29.140120    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:31.695315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:31.723717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:31.755093    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.755155    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:31.758672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:31.786260    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.786260    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:31.790917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:31.817450    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.817450    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:31.822438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:31.852769    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.852788    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:31.856218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:31.885715    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.885715    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:31.890036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:31.919240    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.919240    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:31.924888    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:31.956860    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.956860    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:31.960848    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:31.989055    6576 logs.go:282] 0 containers: []
	W1205 08:07:31.989055    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:31.989055    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:31.989055    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:32.055751    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:32.055751    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:32.091848    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:32.091848    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:32.183494    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:32.172400   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.173483   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.174469   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.175868   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:32.177099   11376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:32.183494    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:32.183494    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:32.211020    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:32.211056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:34.770702    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:34.796134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:34.830020    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.830052    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:34.833506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:34.860829    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.860829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:34.864718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:34.895302    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.895302    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:34.899305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:34.928933    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.928933    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:34.935599    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:34.964256    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.964280    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:34.967945    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:34.995571    6576 logs.go:282] 0 containers: []
	W1205 08:07:34.995571    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:35.001155    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:35.038603    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.038603    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:35.042249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:35.075025    6576 logs.go:282] 0 containers: []
	W1205 08:07:35.075025    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:35.075025    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:35.075025    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:35.136020    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:35.136020    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:35.198233    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:35.198233    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:35.236713    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:35.236713    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:35.327635    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:35.315598   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.316759   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.320319   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.322127   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:35.323353   11549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:35.327659    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:35.327659    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:37.859618    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:37.890074    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:37.922724    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.922724    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:37.926571    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:37.959720    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.959720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:37.963770    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:37.991602    6576 logs.go:282] 0 containers: []
	W1205 08:07:37.991602    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:37.995673    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:38.023771    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.023771    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:38.030170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:38.061676    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.061676    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:38.065660    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:38.116492    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.116542    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:38.122475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:38.151483    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.151483    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:38.155624    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:38.184512    6576 logs.go:282] 0 containers: []
	W1205 08:07:38.184512    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:38.184512    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:38.184512    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:38.221972    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:38.221972    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:38.315283    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:38.304319   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.306082   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.307978   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.309605   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:38.310846   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:38.315283    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:38.315283    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:38.342209    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:38.342209    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:38.391392    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:38.391470    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:40.955418    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:40.982062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:41.015938    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.015938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:41.019996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:41.049917    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.049917    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:41.052925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:41.084946    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.084946    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:41.088068    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:41.120218    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.120297    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:41.123688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:41.152948    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.152948    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:41.156508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:41.183795    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.183795    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:41.187681    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:41.217097    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.217097    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:41.221130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:41.252354    6576 logs.go:282] 0 containers: []
	W1205 08:07:41.252354    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:41.252354    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:41.252354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:41.345903    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:41.332593   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.336834   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.339033   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340171   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:41.340983   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:41.345903    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:41.345903    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:41.373149    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:41.373149    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:41.423553    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:41.423553    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:41.485144    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:41.485144    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:44.029139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:44.056384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:44.087995    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.088078    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:44.091865    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:44.118934    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.118934    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:44.122494    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:44.150822    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.150864    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:44.154454    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:44.183401    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.183401    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:44.187086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:44.214588    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.214644    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:44.217896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:44.249548    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.249548    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:44.253290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:44.281230    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.281230    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:44.284996    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:44.314362    6576 logs.go:282] 0 containers: []
	W1205 08:07:44.314426    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:44.314426    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:44.314426    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:44.378166    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:44.378166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:44.420024    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:44.420024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:44.510942    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:44.501504   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.502772   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.503633   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.506343   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:44.507775   12027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:44.510942    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:44.510942    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:44.539432    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:44.539482    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:47.095962    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:47.121976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:47.155042    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.155042    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:47.159040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:47.188768    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.188768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:47.192847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:47.220500    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.220500    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:47.224299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:47.252483    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.252483    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:47.256264    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:47.285852    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.285852    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:47.290573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:47.319383    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.319450    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:47.323007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:47.353203    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.353203    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:47.357241    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:47.385498    6576 logs.go:282] 0 containers: []
	W1205 08:07:47.385498    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:47.385498    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:47.385498    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:47.449686    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:47.449686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:47.490407    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:47.490407    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:47.577868    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:47.566167   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.567021   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.569823   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.570745   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:47.574800   12190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:47.577868    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:47.577868    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:47.604652    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:47.604652    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:50.157279    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:50.184328    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:50.218852    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.218852    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:50.222438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:50.250551    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.250571    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:50.254169    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:50.285371    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.285424    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:50.289741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:50.320093    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.320093    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:50.323845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:50.357038    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.357084    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:50.360291    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:50.389753    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.389829    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:50.392859    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:50.423710    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.423710    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:50.427343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:50.454456    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.454456    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:50.454456    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:50.454456    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:50.516581    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:50.516581    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:50.555412    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:50.555412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:50.648402    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:50.638282   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.639233   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.641786   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.642733   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.645724   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:50.648402    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:50.648402    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:50.673701    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:50.673701    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
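	The eight container probes above can be reproduced from a shell inside the node as a single loop; this is the same docker ps filter the wait loop issues once per component on every cycle:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
	    done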
	I1205 08:07:53.230542    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:53.256707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:53.290781    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.290781    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:53.294254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:53.326261    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.326261    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:53.329838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:53.359630    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.359630    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:53.364896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:53.396046    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.396046    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:53.400120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:53.428713    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.428713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:53.432409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:53.462479    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.462479    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:53.467583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:53.495306    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.495306    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:53.499565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:53.530622    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.530622    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:53.530622    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:53.530622    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:53.593183    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:53.593183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:53.633807    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:53.633807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:53.721016    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:53.712922   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.714157   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.715494   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.716874   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.718161   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:53.721016    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:53.721016    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:53.748333    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:53.748442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.315862    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:56.341452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:56.374032    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.374063    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:56.377843    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:56.408635    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.408698    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:56.412330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:56.442083    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.442083    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:56.445380    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:56.473679    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.473749    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:56.477263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:56.506107    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.506156    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:56.510975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:56.538958    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.539022    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:56.542581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:56.572303    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.572303    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:56.576375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:56.604073    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.604073    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:56.604073    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:56.604145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:56.641552    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:56.641552    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:56.734944    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:56.721878   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.722727   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.725718   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.727423   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.728368   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:56.735002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:56.735046    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:56.770367    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:56.770412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.826378    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:56.826378    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.393300    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:59.417617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:59.452220    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.452220    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:59.456092    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:59.484787    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.484787    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:59.488348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:59.516670    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.516670    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:59.521214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:59.548048    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.548048    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:59.551862    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:59.576869    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.576869    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:59.581825    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:59.610579    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.610579    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:59.614523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:59.642507    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.642507    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:59.646397    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:59.675062    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.675062    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:59.675062    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:59.675062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.739704    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:59.739704    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:59.782363    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:59.782363    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:59.876076    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:59.865923   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.867089   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.868088   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.870067   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.871213   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:59.876076    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:59.876076    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:59.903005    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:59.903005    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
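	With every component scan coming back empty, the kubelet journal collected above is the most likely place the root cause appears; a quick pass over the same 400-line window, assuming grep is available on the node:

	    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail|apiserver'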
	I1205 08:08:02.456978    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:02.483895    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:02.516374    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.516374    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:02.520443    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:02.553066    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.553148    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:02.556844    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:02.585220    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.585220    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:02.589183    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:02.620655    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.620655    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:02.625389    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:02.659292    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.659369    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:02.662727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:02.690972    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.690972    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:02.694944    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:02.723751    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.723797    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:02.727357    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:02.764750    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.764750    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:02.764750    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:02.764750    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:02.834733    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:02.834733    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:02.873432    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:02.873432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:02.963503    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:02.952119   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.955623   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.956877   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.957681   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.960011   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:02.963503    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:02.963503    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:02.992067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:02.992067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:05.547340    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:05.572946    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:05.605473    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.605473    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:05.609479    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:05.639072    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.639072    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:05.642702    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:05.674126    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.674174    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:05.678318    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:05.710378    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.710378    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:05.713988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:05.743263    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.743263    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:05.748802    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:05.777467    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.777467    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:05.781993    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:05.816147    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.816147    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:05.820044    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:05.849173    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.849173    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:05.849173    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:05.849173    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:05.937771    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:05.926656   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.928398   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.929479   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.932790   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.933608   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:05.937771    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:05.937771    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:05.965110    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:05.965110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:06.012927    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:06.012927    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:06.076287    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:06.076287    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
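	The timestamps show the cadence of the wait loop: roughly every three seconds it re-probes for an apiserver process and then repeats the container scan and log sweep. A shell sketch of the equivalent bounded wait (the retry limit here is an assumption, not minikube's internal timeout):

	    tries=0
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      tries=$((tries + 1))
	      [ "$tries" -ge 100 ] && { echo 'kube-apiserver never appeared' >&2; exit 1; }
	      sleep 3
	    done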
	I1205 08:08:08.621402    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:08.647297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:08.678598    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.678679    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:08.681866    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:08.710779    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.710856    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:08.714554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:08.745379    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.745379    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:08.750135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:08.785796    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.785840    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:08.791900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:08.823728    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.823778    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:08.827659    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:08.858652    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.858726    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:08.862304    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:08.893238    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.893287    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:08.896783    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:08.927578    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.927578    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:08.927578    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:08.927578    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:08.990752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:08.990752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:09.030509    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:09.030509    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:09.116112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:09.116629    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:09.116629    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:09.148307    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:09.148307    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:11.720341    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:11.750190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:11.784223    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.784247    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:11.789837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:11.819184    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.819184    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:11.824438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:11.852058    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.852058    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:11.857984    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:11.888391    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.888391    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:11.891707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:11.921973    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.921973    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:11.925426    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:11.953845    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.953845    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:11.957863    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:11.987150    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.987236    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:11.990921    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:12.018843    6576 logs.go:282] 0 containers: []
	W1205 08:08:12.018895    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:12.018895    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:12.018918    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:12.048523    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:12.048523    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:12.099490    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:12.099490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:12.163368    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:12.163368    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:12.204867    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:12.204867    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:12.290894    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:14.795945    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:14.821749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:14.851399    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.851399    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:14.855010    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:14.887370    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.887370    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:14.891117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:14.922139    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.922139    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:14.926245    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:14.954095    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.954095    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:14.959551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:14.987564    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.987564    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:14.991080    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:15.023941    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.023941    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:15.027344    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:15.056411    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.056474    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:15.059417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:15.092400    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.092400    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:15.092400    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:15.092400    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:15.119932    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:15.119932    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:15.169067    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:15.169067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:15.232603    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:15.232603    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:15.276106    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:15.276106    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:15.363421    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:17.870108    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:17.895889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:17.927528    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.927528    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:17.931166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:17.959105    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.959105    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:17.962846    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:17.994011    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.994011    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:17.998047    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:18.026606    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.026677    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:18.030234    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:18.061389    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.061389    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:18.065290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:18.096454    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.096454    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:18.100320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:18.129213    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.129213    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:18.133040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:18.160088    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.160111    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:18.160111    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:18.160111    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:18.221228    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:18.221228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:18.258886    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:18.258886    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:18.348416    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:18.348496    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:18.348525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:18.379855    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:18.379855    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
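	The gathering steps in this cycle are plain journalctl and dmesg invocations run as root inside the node, each capped at the last 400 lines; standalone forms of the commands used above:

	  sudo journalctl -u kubelet -n 400                 # kubelet unit log
	  sudo journalctl -u docker -u cri-docker -n 400    # runtime unit logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400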
	I1205 08:08:20.936239    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:20.959002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:20.990013    6576 logs.go:282] 0 containers: []
	W1205 08:08:20.990085    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:20.993773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:21.021884    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.021925    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:21.025964    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:21.054531    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.054531    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:21.058277    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:21.088997    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.089078    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:21.092631    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:21.121326    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.121360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:21.125135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:21.160429    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.160496    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:21.164226    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:21.192488    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.192557    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:21.196294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:21.228406    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.228445    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:21.228445    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:21.228495    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:21.291604    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:21.292600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:21.331218    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:21.331218    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:21.412454    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
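	The "describe nodes" probe deliberately bypasses the host-side kubeconfig: it runs the version-matched kubectl that minikube installs under /var/lib/minikube/binaries against the admin kubeconfig baked into the node, so its failure isolates the apiserver itself rather than any host networking. The standalone form of the call:

	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig

	It still dials localhost:8443 from inside the node, so it fails with the same connection-refused error as every other client here.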
	I1205 08:08:21.412454    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:21.412454    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:21.441164    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:21.441229    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:23.994395    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:24.020275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:24.054682    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.054682    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:24.058674    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:24.089654    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.089654    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:24.093569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:24.123224    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.123224    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:24.127942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:24.155350    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.155350    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:24.159192    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:24.192652    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.192652    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:24.197194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:24.229851    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.229851    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:24.233957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:24.262158    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.262158    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:24.266478    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:24.297683    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.297766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:24.297766    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:24.297766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:24.388464    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:24.388464    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:24.388464    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:24.416764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:24.416764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:24.468678    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:24.469203    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:24.532678    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:24.532678    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.075175    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:27.104797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:27.137440    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.137440    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:27.141581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:27.171103    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.171126    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:27.174625    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:27.205068    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.205102    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:27.208711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:27.237765    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.237806    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:27.241719    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:27.269838    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.269838    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:27.273353    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:27.300835    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.300835    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:27.304633    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:27.333062    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.333062    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:27.338523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:27.366572    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.366572    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:27.366572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:27.366572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.402514    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:27.402514    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:27.499452    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:27.499452    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:27.499452    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:27.528089    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:27.528089    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:27.596881    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:27.596881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.168154    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:30.194986    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:30.228709    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.228709    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:30.233961    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:30.268256    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.268256    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:30.271667    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:30.300456    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.300519    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:30.303870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:30.335955    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.335955    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:30.339590    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:30.367829    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.367829    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:30.373123    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:30.401294    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.401327    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:30.404974    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:30.436526    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.436526    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:30.440246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:30.478544    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.478599    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:30.478599    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:30.478651    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.544716    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:30.544716    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:30.584496    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:30.584496    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:30.671308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:30.671352    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:30.671352    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:30.699029    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:30.699029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
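	The container-status command is written defensively: `which crictl || echo crictl` keeps the command word non-empty even when crictl is not installed, so the shell still attempts to run it and the trailing fallback fires on failure. Expanded:

	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a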
	I1205 08:08:33.251744    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:33.280500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:33.311912    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.311912    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:33.316407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:33.347966    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.347966    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:33.351341    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:33.386249    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.386249    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:33.389828    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:33.420571    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.420571    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:33.423584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:33.450599    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.450599    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:33.453949    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:33.488480    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.488480    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:33.492797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:33.523382    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.523382    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:33.526929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:33.561860    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.561860    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:33.561860    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:33.561860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:33.628425    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:33.628425    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:33.666453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:33.666453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:33.756872    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:33.756872    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:33.756872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:33.785780    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:33.785780    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:36.342322    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:36.368238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:36.399529    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.399529    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:36.402710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:36.430561    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.430561    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:36.434233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:36.461894    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.461894    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:36.466270    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:36.492354    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.492354    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:36.495668    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:36.526818    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.526818    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:36.530606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:36.564752    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.564752    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:36.569130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:36.598403    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.598403    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:36.603579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:36.635757    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.635757    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:36.635757    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:36.635757    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:36.702715    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:36.702715    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:36.740740    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:36.740740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:36.827779    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:36.827779    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:36.827779    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:36.855113    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:36.855148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.404078    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:39.428626    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:39.461540    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.461540    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:39.465369    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:39.497259    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.497368    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:39.501168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:39.532526    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.532526    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:39.537388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:39.570114    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.570114    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:39.574332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:39.607392    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.607392    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:39.611100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:39.640933    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.640933    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:39.644381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:39.673224    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.673224    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:39.678235    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:39.706766    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.706766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:39.706766    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:39.706766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:39.734527    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:39.734527    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.787138    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:39.787138    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:39.849637    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:39.849637    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:39.889331    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:39.889331    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:39.977390    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
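	The cycles above repeat on a roughly three-second cadence: each iteration re-runs pgrep for a live apiserver process and, finding none, re-gathers the same diagnostics. A rough shell equivalent of the observed wait loop (a sketch of the behaviour read off the log timestamps, not minikube's actual Go retry logic):

	  while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 3   # interval inferred from the timestamps above
	  done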
	I1205 08:08:42.481792    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:42.508550    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:42.541632    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.541632    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:42.545635    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:42.595829    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.595829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:42.601196    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:42.630888    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.630888    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:42.634929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:42.665451    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.665451    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:42.668581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:42.701244    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.701244    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:42.705368    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:42.737250    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.737250    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:42.740441    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:42.766622    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.766700    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:42.770278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:42.801486    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.801486    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:42.801486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:42.801486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:42.866794    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:42.866930    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:42.906819    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:42.906819    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:43.000226    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:43.000226    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:43.000226    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:43.027011    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:43.027011    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:45.586794    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:45.615024    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:45.642666    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.642666    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:45.646348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:45.675867    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.675867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:45.679650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:45.711785    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.711785    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:45.717449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:45.750065    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.750109    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:45.753406    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:45.782908    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.782908    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:45.786362    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:45.816309    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.816309    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:45.819889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:45.847629    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.847656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:45.850622    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:45.880676    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.880733    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:45.880759    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:45.880759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:45.943843    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:45.943843    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:45.984212    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:45.984212    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:46.071821    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:46.071821    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:46.071821    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:46.098280    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:46.098280    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
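Each pass above follows the same shape: probe for a running kube-apiserver, enumerate the expected control-plane containers, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal stand-alone Go sketch of the probe step, assuming local exec in place of minikube's SSH runner and omitting the sudo seen in the log (this is an illustration, not minikube source):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits non-zero when nothing matches, so Run() returning nil
// means a matching process exists. (Run as root to see root processes.)
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // the log shows roughly this cadence
	}
	fmt.Println("timed out waiting for kube-apiserver")
}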
	I1205 08:08:48.651285    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:48.676952    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:48.706696    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.706696    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:48.710427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:48.738766    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.738766    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:48.746145    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:48.773486    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.773486    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:48.778542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:48.805908    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.805908    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:48.809817    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:48.840360    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.840360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:48.843723    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:48.871560    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.871560    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:48.875316    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:48.903556    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.903556    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:48.908924    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:48.938455    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.938455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:48.938455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:48.938455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:49.001951    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:49.001951    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:49.042098    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:49.042098    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:49.131350    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:49.131350    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:49.131350    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:49.166759    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:49.166759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:51.724851    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:51.752650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:51.780528    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.780542    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:51.784422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:51.816577    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.816577    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:51.819989    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:51.849244    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.849244    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:51.853211    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:51.881159    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.881222    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:51.884831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:51.917237    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.917237    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:51.921202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:51.951018    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.951018    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:51.955222    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:51.982262    6576 logs.go:282] 0 containers: []
	W1205 08:08:51.982262    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:51.986170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:52.013482    6576 logs.go:282] 0 containers: []
	W1205 08:08:52.013526    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:52.013564    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:52.013564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:52.050334    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:52.050334    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:52.144178    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:52.133526   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.134871   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.136142   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.137800   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:52.139220   15641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:52.144178    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:52.144178    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:52.171135    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:52.171135    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:52.223993    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:52.223993    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:54.792613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:54.817042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:54.848768    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.848768    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:54.852580    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:54.881045    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.881045    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:54.885194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:54.915368    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.915368    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:54.919753    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:54.952592    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.952679    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:54.956477    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:54.989304    6576 logs.go:282] 0 containers: []
	W1205 08:08:54.989357    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:54.992976    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:55.025855    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.025855    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:55.029407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:55.059218    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.059290    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:55.063529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:55.092992    6576 logs.go:282] 0 containers: []
	W1205 08:08:55.092992    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:55.092992    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:55.092992    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:55.201249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:55.191114   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.192097   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.193360   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.194595   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:55.195561   15797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:55.201249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:55.201249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:55.228877    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:55.228907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:55.286872    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:55.286872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:55.357844    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:55.357844    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
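Every describe-nodes attempt fails the same way because nothing is listening on the apiserver port: the kubectl errors all reduce to a refused TCP connection to localhost:8443. The symptom can be reproduced outside kubectl with a plain dial, as in this sketch (the endpoint is taken from the log; nothing else is assumed):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver listening this prints e.g.
		// "dial tcp [::1]:8443: connect: connection refused",
		// matching the stderr lines above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}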
	I1205 08:08:57.912434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:57.938621    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:57.968927    6576 logs.go:282] 0 containers: []
	W1205 08:08:57.968927    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:57.975548    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:58.003200    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.003200    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:58.006983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:58.037886    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.037886    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:58.041594    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:58.072037    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.072037    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:58.076711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:58.118201    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.118201    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:58.122059    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:58.150468    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.150468    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:58.154554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:58.186009    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.186009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:58.189676    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:58.219204    6576 logs.go:282] 0 containers: []
	W1205 08:08:58.219204    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:58.219204    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:58.219204    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:58.283572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:58.283572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:58.322291    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:58.322291    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:58.406023    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:58.395756   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.396947   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.398267   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.399561   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:58.400843   15978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:58.406023    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:58.406023    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:58.434361    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:58.434881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:00.986031    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:01.012520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:01.041860    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.041860    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:01.045736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:01.074168    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.074168    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:01.081136    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:01.115160    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.115160    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:01.121214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:01.152200    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.152200    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:01.155786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:01.187849    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.187849    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:01.193651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:01.220927    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.220927    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:01.225251    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:01.262648    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.262648    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:01.266549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:01.298388    6576 logs.go:282] 0 containers: []
	W1205 08:09:01.298388    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:01.298459    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:01.298491    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:01.389098    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:01.377026   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.377856   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.379921   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.380630   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:01.384061   16147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:01.389126    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:01.389126    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:01.418232    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:01.418232    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:01.463083    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:01.463083    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:01.528159    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:01.528159    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:04.078505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:04.106462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:04.136412    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.136412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:04.139845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:04.168393    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.168465    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:04.171965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:04.203281    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.203281    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:04.207129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:04.235244    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.235244    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:04.239720    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:04.271746    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.271746    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:04.279903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:04.308486    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.308486    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:04.312482    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:04.341988    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.341988    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:04.345122    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:04.378152    6576 logs.go:282] 0 containers: []
	W1205 08:09:04.378152    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:04.378152    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:04.378152    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:04.443403    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:04.443403    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:04.484661    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:04.484661    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:04.574793    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:04.560661   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.561649   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.566401   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.568432   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:04.570652   16319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:04.574793    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:04.574793    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:04.606357    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:04.606357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:07.162554    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:07.194738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:07.227905    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.227977    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:07.232048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:07.262861    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.262861    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:07.266595    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:07.297184    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.297184    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:07.300873    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:07.331523    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.331523    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:07.335838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:07.367893    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.367893    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:07.371282    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:07.400934    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.400934    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:07.403928    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:07.431616    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.431616    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:07.435314    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:07.469043    6576 logs.go:282] 0 containers: []
	W1205 08:09:07.469043    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:07.469043    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:07.469043    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:07.497832    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:07.497832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:07.547846    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:07.547846    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:07.611682    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:07.611682    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:07.651105    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:07.651105    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:07.741756    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:07.730861   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.731799   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.734095   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.735203   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:07.736136   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:10.247138    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:10.275755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:10.311911    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.311911    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:10.317436    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:10.347243    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.347243    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:10.353296    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:10.384412    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.384412    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:10.389236    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:10.419505    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.419505    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:10.423688    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:10.451213    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.451213    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:10.457390    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:10.485001    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.485001    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:10.488370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:10.519268    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.519268    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:10.524029    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:10.551544    6576 logs.go:282] 0 containers: []
	W1205 08:09:10.551544    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:10.551544    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:10.551544    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:10.618971    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:10.618971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:10.657753    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:10.657753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:10.751422    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:10.740331   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.741382   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.742135   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.746174   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:10.747103   16640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:10.751422    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:10.751422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:10.777901    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:10.778003    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:13.340867    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:13.373007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:13.404147    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.404191    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:13.408078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:13.440768    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.440768    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:13.444748    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:13.474390    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.474390    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:13.478381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:13.508004    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.508057    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:13.511749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:13.543789    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.543789    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:13.547384    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:13.576308    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.576377    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:13.579736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:13.609792    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.609792    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:13.613298    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:13.642091    6576 logs.go:282] 0 containers: []
	W1205 08:09:13.642091    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:13.642091    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:13.642091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:13.671624    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:13.671686    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:13.718995    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:13.718995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:13.782056    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:13.782056    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:13.821453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:13.821453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:13.928916    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:13.918145   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.919184   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.920131   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.922446   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:13.923724   16819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:16.433905    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:16.459887    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:16.496160    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.496160    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:16.499639    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:16.526877    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.526877    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:16.530750    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:16.560261    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.560261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:16.563991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:16.595914    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.595914    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:16.599869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:16.627694    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.627694    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:16.632403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:16.660769    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.660769    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:16.664194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:16.692707    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.692707    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:16.698036    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:16.728749    6576 logs.go:282] 0 containers: []
	W1205 08:09:16.728749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:16.728749    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:16.728749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:16.778953    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:16.779017    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:16.841091    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:16.841091    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:16.881145    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:16.881145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:16.969295    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:16.959645   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.960522   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.962481   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.963671   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:16.964721   16979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:16.969332    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:16.969362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
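For reference, the docker ps -a --filter=name=k8s_<component> --format={{.ID}} probes repeated throughout this loop can be reproduced outside minikube. Below is a minimal, illustrative Go sketch (not minikube's actual logs.go code; only the docker flags are taken verbatim from the log) that scans for the same eight control-plane containers and reports when none is found, matching the "0 containers" / "No container was found matching" pairs above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The components minikube checks for in each pass of the loop above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, c := range components {
		// Same docker invocation as in the log: list IDs of containers
		// (running or exited) whose name matches the k8s_<component> prefix.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %q: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}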
	I1205 08:09:19.502757    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:19.529429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:19.557499    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.557499    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:19.561490    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:19.590127    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.590127    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:19.594042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:19.622382    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.622382    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:19.626026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:19.653513    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.653513    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:19.656672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:19.686153    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.686153    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:19.691297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:19.720831    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.720858    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:19.724786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:19.751107    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.751107    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:19.754979    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:19.782999    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.782999    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:19.782999    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:19.782999    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:19.844801    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:19.844801    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:19.884439    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:19.884439    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:19.977224    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:19.964996   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.968924   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.970786   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.973180   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.975233   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:19.977224    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:19.977224    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:20.007404    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:20.007404    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:22.569427    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:22.596121    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:22.628181    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.628181    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:22.632086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:22.660848    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.660848    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:22.664755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:22.694182    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.694261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:22.698085    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:22.726532    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.726600    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:22.730354    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:22.757319    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.757355    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:22.760937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:22.792791    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.792791    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:22.799388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:22.841372    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.841372    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:22.845285    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:22.879377    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.879377    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:22.879377    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:22.879377    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:22.946156    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:22.946156    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:22.990461    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:22.990461    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:23.119453    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:23.109436   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.110223   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.112884   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.115261   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.117081   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:23.119453    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:23.119453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:23.146199    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:23.147241    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:25.703191    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:25.728570    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:25.758884    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.758884    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:25.765071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:25.792957    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.792957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:25.796556    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:25.825466    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.825466    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:25.828728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:25.857451    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.857521    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:25.861306    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:25.887700    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.887700    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:25.891071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:25.920875    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.920875    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:25.924452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:25.952908    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.952952    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:25.956305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:25.987608    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.987608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:25.987608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:25.987608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:26.027162    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:26.027162    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:26.120245    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:26.107417   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.108200   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.112823   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.113923   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.114975   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:26.120245    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:26.120245    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:26.147670    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:26.147697    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:26.198923    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:26.198963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:28.769076    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:28.797716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:28.829859    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.829898    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:28.833257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:28.864507    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.864507    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:28.868407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:28.898827    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.898827    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:28.902971    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:28.933087    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.933087    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:28.937063    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:28.964140    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.964140    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:28.968403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:28.997620    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.997620    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:29.001779    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:29.035745    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.035745    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:29.038757    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:29.068429    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.068429    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:29.068429    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:29.068429    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:29.124688    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:29.124688    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:29.188675    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:29.188675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:29.227887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:29.227887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:29.312828    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:29.301515   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.302784   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.303557   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.306066   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.307186   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:29.312828    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:29.312828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
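Every kubectl failure in this loop reduces to the same root symptom: nothing is listening on localhost:8443 inside the node, so each API group discovery attempt fails with "connect: connection refused". A minimal Go sketch of that probe (illustrative only; the endpoint URL is taken verbatim from the errors above, the roughly 3-second retry cadence from the log timestamps, and the 10-attempt bound is an arbitrary assumption):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the endpoint kubectl keeps failing on in the log:
	// https://localhost:8443/api. Certificate verification is skipped
	// because this is only a liveness probe, not a real API client.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 10; i++ { // 10 attempts is an illustrative bound
		resp, err := client.Get("https://localhost:8443/api?timeout=32s")
		if err != nil {
			// Matches the "connection refused" failures above.
			fmt.Printf("attempt %d: %v\n", i+1, err)
			time.Sleep(3 * time.Second) // log shows ~3s between retries
			continue
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: apiserver responded with %s\n", i+1, resp.Status)
		return
	}
	fmt.Println("apiserver never came up on localhost:8443")
}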
	I1205 08:09:31.845911    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:31.878797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:31.916523    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.916523    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:31.919583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:31.950914    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.950976    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:31.954687    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:31.983555    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.983580    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:31.987603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:32.021007    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.021007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:32.025190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:32.056980    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.057033    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:32.060500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:32.104780    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.104780    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:32.108815    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:32.135429    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.135494    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:32.138969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:32.171260    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.171260    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:32.171260    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:32.171260    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:32.237752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:32.237752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:32.277887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:32.277887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:32.365810    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:32.355223   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.356563   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.358244   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.359525   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.360794   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:32.365810    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:32.365810    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:32.392252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:32.392252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:34.943627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:34.969529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:35.010672    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.010672    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:35.015462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:35.048036    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.048036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:35.055991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:35.103005    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.103005    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:35.106890    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:35.137906    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.137906    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:35.141530    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:35.172625    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.172625    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:35.176175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:35.209474    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.209474    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:35.213175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:35.244787    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.244787    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:35.248557    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:35.275127    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.275158    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:35.275158    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:35.275158    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:35.334298    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:35.334298    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:35.373969    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:35.373969    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:35.459656    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:35.459755    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:35.459755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:35.489057    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:35.489057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:38.049404    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:38.073507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:38.101267    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.101337    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:38.104951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:38.134276    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.134276    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:38.139127    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:38.166437    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.166437    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:38.170518    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:38.199145    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.199145    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:38.202760    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:38.230466    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.230466    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:38.233640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:38.263867    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.263867    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:38.267542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:38.297791    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.297791    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:38.301874    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:38.332980    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.332980    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:38.332980    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:38.332980    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:38.396086    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:38.396086    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:38.433018    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:38.433018    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:38.516847    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:38.516847    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:38.516847    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:38.545985    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:38.545985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:41.097758    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:41.125607    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:41.156423    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.156423    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:41.159823    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:41.188324    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.188383    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:41.192299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:41.224751    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.224789    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:41.228655    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:41.257790    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.257790    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:41.261606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:41.292935    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.292999    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:41.296487    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:41.322728    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.322728    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:41.326980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:41.355569    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.355569    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:41.359412    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:41.388228    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.388228    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:41.388228    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:41.388228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:41.454094    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:41.454094    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:41.492536    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:41.492536    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:41.584848    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:41.584892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:41.584892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:41.611807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:41.611807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:44.169483    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:44.196254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:44.224412    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.224412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:44.229628    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:44.257724    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.257724    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:44.262355    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:44.289872    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.289926    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:44.293506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:44.321891    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.321891    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:44.325045    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:44.354424    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.354424    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:44.357980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:44.388960    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.388960    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:44.392224    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:44.424484    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.424484    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:44.427710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:44.458834    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.458834    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:44.458834    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:44.458834    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:44.523336    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:44.523336    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:44.560362    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:44.560362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:44.656711    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:44.656711    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:44.656711    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:44.682009    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:44.683010    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.243380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:47.270606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:47.302678    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.302720    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:47.305835    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:47.334169    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.334213    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:47.338162    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:47.370622    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.370693    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:47.374238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:47.406764    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.406787    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:47.410449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:47.439290    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.439332    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:47.442816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:47.475239    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.475239    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:47.479100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:47.510196    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.510196    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:47.513831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:47.543315    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.543378    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:47.543378    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:47.543411    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:47.577600    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:47.577600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.651517    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:47.651517    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:47.717530    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:47.717530    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:47.757989    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:47.757989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:47.848615    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:50.354473    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:50.381662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:50.410303    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.410303    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:50.416210    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:50.443479    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.443479    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:50.447606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:50.475214    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.475214    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:50.479409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:50.508984    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.508984    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:50.513185    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:50.544532    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.544532    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:50.548200    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:50.578350    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.578350    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:50.583137    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:50.615656    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.615656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:50.619983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:50.649117    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.649117    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:50.649117    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:50.649117    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:50.678837    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:50.678837    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:50.730963    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:50.730963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:50.797442    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:50.797442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:50.839051    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:50.840050    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:50.934073    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.440116    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:53.465957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:53.497390    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.497462    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:53.501077    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:53.529488    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.529488    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:53.536331    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:53.563367    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.563367    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:53.566361    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:53.596894    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.596894    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:53.600611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:53.630623    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.630623    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:53.634434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:53.664123    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.664123    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:53.668403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:53.697948    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.697948    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:53.701419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:53.730378    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.730462    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:53.730462    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:53.730462    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:53.798465    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:53.798465    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:53.841124    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:53.841124    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:53.935344    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.936318    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:53.936318    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:53.965040    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:53.965040    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:56.520907    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:56.551718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:56.584506    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.584506    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:56.588065    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:56.618214    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.618214    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:56.622199    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:56.650798    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.650798    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:56.654367    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:56.685409    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.685440    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:56.688781    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:56.719049    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.719163    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:56.722810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:56.753646    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.753646    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:56.757666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:56.793942    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.793942    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:56.798049    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:56.827315    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.827315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:56.827315    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:56.827315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:56.893213    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:56.893213    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:56.931234    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:56.931234    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:57.020142    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:57.020142    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:57.020142    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:57.048871    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:57.048871    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:59.606004    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:59.632524    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:59.662177    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.662177    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:59.666311    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:59.701152    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.701202    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:59.704398    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:59.733278    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.733278    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:59.738174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:59.769038    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.769038    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:59.773266    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:59.814259    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.814259    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:59.818330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:59.848066    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.848066    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:59.851684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:59.880029    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.880029    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:59.884457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:59.914608    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.914608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:59.914608    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:59.914608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:59.978490    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:59.978490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:00.018881    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:00.018881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:00.109744    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:00.109744    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:00.109744    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:00.137522    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:00.137591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:02.693722    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:02.718495    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:10:02.754864    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.754864    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:10:02.758547    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:10:02.795133    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.795231    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:10:02.798914    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:10:02.828115    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.828115    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:10:02.831263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:10:02.864241    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.864241    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:10:02.867861    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:10:02.895555    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.895555    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:10:02.901617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:10:02.931756    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.931756    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:10:02.935718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:10:02.964034    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.964034    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:10:02.968113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:10:03.000080    6576 logs.go:282] 0 containers: []
	W1205 08:10:03.000080    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:10:03.000080    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:03.000080    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:03.092694    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:03.094183    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:03.094183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:03.124625    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:03.124625    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:03.178920    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:10:03.178920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:10:03.237776    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:10:03.237776    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:05.783793    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:05.810874    6576 out.go:203] 
	W1205 08:10:05.812874    6576 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 08:10:05.812874    6576 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 08:10:05.812874    6576 out.go:285] * Related issues:
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 08:10:05.815880    6576 out.go:203] 
	
	
	==> Docker <==
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014561584Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014638592Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014649493Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014654993Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014662094Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014686897Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.014806909Z" level=info msg="Initializing buildkit"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.159292906Z" level=info msg="Completed buildkit initialization"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170523657Z" level=info msg="Daemon has completed initialization"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170725677Z" level=info msg="API listen on [::]:2376"
	Dec 05 08:04:00 newest-cni-042100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170749180Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 dockerd[930]: time="2025-12-05T08:04:00.170751380Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 08:04:00 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:00Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Loaded network plugin cni"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 08:04:01 newest-cni-042100 cri-dockerd[1226]: time="2025-12-05T08:04:01Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 08:04:01 newest-cni-042100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:24.143416   20352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:24.144612   20352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:24.146318   20352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:24.148234   20352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:24.149514   20352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.912373] CPU: 10 PID: 467231 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f59c4559b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f59c4559af6.
	[  +0.000001] RSP: 002b:00007fff7b401a80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.986945] CPU: 6 PID: 467375 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f68553b7b20
	[  +0.000010] Code: Unable to access opcode bytes at RIP 0x7f68553b7af6.
	[  +0.000001] RSP: 002b:00007ffe7761e510 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 08:10:24 up  3:44,  0 user,  load average: 0.90, 2.19, 3.28
	Linux newest-cni-042100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:10:21 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:21 newest-cni-042100 kubelet[20170]: E1205 08:10:21.319416   20170 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:21 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:21 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:21 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 05 08:10:21 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:21 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:22 newest-cni-042100 kubelet[20198]: E1205 08:10:22.044808   20198 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:22 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:22 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:22 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
	Dec 05 08:10:22 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:22 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:22 newest-cni-042100 kubelet[20226]: E1205 08:10:22.801408   20226 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:22 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:22 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:23 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
	Dec 05 08:10:23 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:23 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:23 newest-cni-042100 kubelet[20239]: E1205 08:10:23.581349   20239 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:10:23 newest-cni-042100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:10:23 newest-cni-042100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:10:24 newest-cni-042100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
	Dec 05 08:10:24 newest-cni-042100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:10:24 newest-cni-042100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-042100 -n newest-cni-042100: exit status 2 (606.5417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-042100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (13.58s)
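The failure mode is visible in the kubelet section of the log above: every restart exits with "kubelet is configured to not run on a host using cgroup v1", so the kubelet never comes up, the apiserver static pod is never launched, every docker ps -a --filter=name=k8s_kube-apiserver probe returns 0 containers, and kubectl is refused on localhost:8443. A minimal shell sketch for confirming this on a similar run (assumptions: the Docker driver, a node container named newest-cni-042100 as in this log, and stat, pgrep, and curl available inside the node image):

    # Which cgroup hierarchy does the node container see?
    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 mount
    # that this kubelet build refuses to run on.
    docker exec newest-cni-042100 stat -fc %T /sys/fs/cgroup/

    # Re-run the probe minikube itself uses above; no output means the
    # apiserver process never appeared.
    docker exec newest-cni-042100 pgrep -fa kube-apiserver

    # Probe the port kubectl was refused on; expect "connection refused"
    # while the kubelet is crash-looping.
    docker exec newest-cni-042100 curl -ks https://localhost:8443/healthz

On a WSL2 host such as this one, a commonly suggested workaround is to move the VM to cgroup v2 (for example, kernelCommandLine = cgroup_no_v1=all under [wsl2] in .wslconfig, followed by wsl --shutdown); whether that is appropriate for this CI host would need separate verification.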
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (229.47s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 08:13:29.082963    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:13:47.075112    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:13:55.965230    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:14:04.965681    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:14:06.169001    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:14:23.670742    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:14:33.876242    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:14:36.121890    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:14:36.151555    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:15:22.697587    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:15:59.195128    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:16:29.863924    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:16:31.986724    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:16:34.270206    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1205 08:16:51.658143    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 2 (603.6ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-104100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-104100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (0s)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-104100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
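For reference, the pod check that timed out above can be re-run by hand with the same namespace and label selector the helper polls (a minimal sketch, not part of the test run; it assumes the no-preload-104100 context from this run still exists, and it would hit the same EOF until the apiserver behind 127.0.0.1:61565 answers again):

	# manual re-run of the helper's dashboard pod query (sketch)
	kubectl --context no-preload-104100 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard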
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-104100
helpers_test.go:243: (dbg) docker inspect no-preload-104100:

-- stdout --
	[
	    {
	        "Id": "5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043",
	        "Created": "2025-12-05T07:47:18.090294673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 414493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:58:06.386924979Z",
	            "FinishedAt": "2025-12-05T07:57:57.665009272Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/hosts",
	        "LogPath": "/var/lib/docker/containers/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043/5f2a793d75730a5297739121a28ea9a307b891491e52109e5095cb565b880043-json.log",
	        "Name": "/no-preload-104100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-104100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-104100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383-init/diff:/var/lib/docker/overlay2/3bda3928d34b7035b9e8988b6d758e0143ff8ec13519311a575667cb4862769d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c750a24cbece6681f11cc89ce27c8566dd1777db16ff8043b7f2af8b60f0c383/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-104100",
	                "Source": "/var/lib/docker/volumes/no-preload-104100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-104100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-104100",
	                "name.minikube.sigs.k8s.io": "no-preload-104100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db4519a857b1cb5f334b0df06abf490ceaca02f8fd29297b385218566b669e33",
	            "SandboxKey": "/var/run/docker/netns/db4519a857b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61566"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61564"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61565"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-104100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "707b5f83051fc4c181f3506b97f5ea358824531428895a55938badd3159b6c9f",
	                    "EndpointID": "4524197e7adfcc8ed0cbc2de51217f52907988f5d42b7f9fdc11804701eaff4d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-104100",
	                        "5f2a793d7573"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
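The apiserver port mapping buried in the inspect dump above (8443/tcp published on 127.0.0.1:61565) can be read back directly with a Go template; this sketch reuses the same template shape minikube applies to 22/tcp later in these logs:

	# print only the host port bound to the container's 8443/tcp (sketch)
	docker inspect no-preload-104100 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# expected, given the dump above: 61565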
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 2 (596.5154ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
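Host and APIServer were probed with separate --format templates above; both fields come from the same status struct, so a combined template reads them in a single call (a sketch based on the flags already used in this post-mortem; the exit status would still be 2 while the apiserver is stopped):

	# combined status probe (sketch)
	out/minikube-windows-amd64.exe status -p no-preload-104100 -n no-preload-104100 --format "host:{{.Host}} apiserver:{{.APIServer}}"
	# given the outputs above: host:Running apiserver:Stopped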
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-104100 logs -n 25: (1.7057025s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-218000 sudo systemctl cat docker --no-pager                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p bridge-218000 sudo crio config                                               │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/docker/daemon.json                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo docker system info                                       │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p bridge-218000                                                                │ bridge-218000     │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat cri-docker --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cri-dockerd --version                                    │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status containerd --all --full --no-pager      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat containerd --no-pager                      │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /lib/systemd/system/containerd.service               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo cat /etc/containerd/config.toml                          │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo containerd config dump                                   │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo systemctl status crio --all --full --no-pager            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │                     │
	│ ssh     │ -p kubenet-218000 sudo systemctl cat crio --no-pager                            │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ ssh     │ -p kubenet-218000 sudo crio config                                              │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ delete  │ -p kubenet-218000                                                               │ kubenet-218000    │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:04 UTC │ 05 Dec 25 08:04 UTC │
	│ image   │ newest-cni-042100 image list --format=json                                      │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ pause   │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ unpause │ -p newest-cni-042100 --alsologtostderr -v=1                                     │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ delete  │ -p newest-cni-042100                                                            │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	│ delete  │ -p newest-cni-042100                                                            │ newest-cni-042100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 08:10 UTC │ 05 Dec 25 08:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	W1205 08:03:44.511207    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:46.513793    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	Log file created at: 2025/12/05 08:03:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 08:03:48.079593    6576 out.go:360] Setting OutFile to fd 1628 ...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.133685    6576 out.go:374] Setting ErrFile to fd 1512...
	I1205 08:03:48.133685    6576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:03:48.149881    6576 out.go:368] Setting JSON to false
	I1205 08:03:48.152825    6576 start.go:133] hostinfo: {"hostname":"minikube4","uptime":13085,"bootTime":1764908742,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 08:03:48.152825    6576 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 08:03:48.159945    6576 out.go:179] * [newest-cni-042100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 08:03:48.164658    6576 notify.go:221] Checking for updates...
	I1205 08:03:48.167308    6576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:03:48.170547    6576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:03:48.173264    6576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 08:03:48.177277    6576 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:03:48.179134    6576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:03:48.182963    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:48.184223    6576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:03:48.306826    6576 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 08:03:48.310816    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.562528    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.540004205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.565521    6576 out.go:179] * Using the docker driver based on existing profile
	I1205 08:03:48.568528    6576 start.go:309] selected driver: docker
	I1205 08:03:48.568528    6576 start.go:927] validating driver "docker" against &{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.568528    6576 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:03:48.621627    6576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:03:48.870676    6576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-05 08:03:48.852383077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 08:03:48.870676    6576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 08:03:48.870676    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:03:48.871676    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:03:48.871676    6576 start.go:353] cluster config:
	{Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:03:48.874674    6576 out.go:179] * Starting "newest-cni-042100" primary control-plane node in "newest-cni-042100" cluster
	I1205 08:03:48.876674    6576 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 08:03:48.879674    6576 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:03:48.881674    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:03:48.881674    6576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 08:03:48.924123    6576 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:48.965045    6576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:03:48.965045    6576 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 08:03:49.173795    6576 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 08:03:49.174041    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 08:03:49.174210    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 08:03:49.174137    6576 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 08:03:49.176070    6576 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:03:49.176070    6576 start.go:360] acquireMachinesLock for newest-cni-042100: {Name:mk64faa8028cd20830a8b7259a71489655fb7207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:49.176610    6576 start.go:364] duration metric: took 539.2µs to acquireMachinesLock for "newest-cni-042100"
	I1205 08:03:49.176876    6576 start.go:96] Skipping create...Using existing machine configuration
	I1205 08:03:49.176954    6576 fix.go:54] fixHost starting: 
	I1205 08:03:49.185185    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:49.467905    6576 fix.go:112] recreateIfNeeded on newest-cni-042100: state=Stopped err=<nil>
	W1205 08:03:49.468085    6576 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 08:03:46.247259    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:48.745542    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	W1205 08:03:50.273234    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:03:48.514113    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:50.532984    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:53.014533    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:49.492567    6576 out.go:252] * Restarting existing docker container for "newest-cni-042100" ...
	I1205 08:03:49.497575    6576 cli_runner.go:164] Run: docker start newest-cni-042100
	I1205 08:03:50.779131    6576 cli_runner.go:217] Completed: docker start newest-cni-042100: (1.2815354s)
	I1205 08:03:50.788112    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:03:51.139299    6576 kic.go:430] container "newest-cni-042100" state is running.
	I1205 08:03:51.164376    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:51.273747    6576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\config.json ...
	I1205 08:03:51.276892    6576 machine.go:94] provisionDockerMachine start ...
	I1205 08:03:51.284394    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:51.396042    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:51.397040    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:51.397040    6576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:03:51.400042    6576 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 08:03:52.385305    6576 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.385658    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1205 08:03:52.385720    6576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.211458s
	I1205 08:03:52.385800    6576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 08:03:52.435659    6576 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.435659    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1205 08:03:52.435659    6576 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2613971s
	I1205 08:03:52.435659    6576 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1205 08:03:52.467883    6576 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.468216    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1205 08:03:52.468216    6576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.2939732s
	I1205 08:03:52.468216    6576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1205 08:03:52.472465    6576 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.472465    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1205 08:03:52.472465    6576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.2982024s
	I1205 08:03:52.472465    6576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 08:03:52.472991    6576 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.473088    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1205 08:03:52.473088    6576 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2988253s
	I1205 08:03:52.473088    6576 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1205 08:03:52.478918    6576 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.479537    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1205 08:03:52.479537    6576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3052743s
	I1205 08:03:52.479537    6576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1205 08:03:52.488107    6576 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.489284    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1205 08:03:52.489284    6576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.3150206s
	I1205 08:03:52.489284    6576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1205 08:03:52.587256    6576 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:03:52.588098    6576 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1205 08:03:52.588098    6576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.413907s
	I1205 08:03:52.588098    6576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 08:03:52.588098    6576 cache.go:87] Successfully saved all images to host disk.
	W1205 08:03:50.818460    4412 pod_ready.go:104] pod "coredns-66bc5c9577-zrgxp" is not "Ready", error: <nil>
	I1205 08:03:53.244351    4412 pod_ready.go:94] pod "coredns-66bc5c9577-zrgxp" is "Ready"
	I1205 08:03:53.244351    4412 pod_ready.go:86] duration metric: took 21.0105368s for pod "coredns-66bc5c9577-zrgxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.250834    4412 pod_ready.go:83] waiting for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.262503    4412 pod_ready.go:94] pod "etcd-bridge-218000" is "Ready"
	I1205 08:03:53.262503    4412 pod_ready.go:86] duration metric: took 11.6685ms for pod "etcd-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.271087    4412 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.281426    4412 pod_ready.go:94] pod "kube-apiserver-bridge-218000" is "Ready"
	I1205 08:03:53.281426    4412 pod_ready.go:86] duration metric: took 10.3388ms for pod "kube-apiserver-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.286385    4412 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.438718    4412 pod_ready.go:94] pod "kube-controller-manager-bridge-218000" is "Ready"
	I1205 08:03:53.438718    4412 pod_ready.go:86] duration metric: took 152.3311ms for pod "kube-controller-manager-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:53.641268    4412 pod_ready.go:83] waiting for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.039664    4412 pod_ready.go:94] pod "kube-proxy-8r4gs" is "Ready"
	I1205 08:03:54.039664    4412 pod_ready.go:86] duration metric: took 398.3895ms for pod "kube-proxy-8r4gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.241161    4412 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:94] pod "kube-scheduler-bridge-218000" is "Ready"
	I1205 08:03:54.641085    4412 pod_ready.go:86] duration metric: took 399.9175ms for pod "kube-scheduler-bridge-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:03:54.641085    4412 pod_ready.go:40] duration metric: took 32.4419039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:03:54.749081    4412 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:03:54.754768    4412 out.go:179] * Done! kubectl is now configured to use "bridge-218000" cluster and "default" namespace by default
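[Editor's note] The pod_ready.go lines interleaved above are a polling loop: each pod is re-checked until it reports "Ready" or disappears, with a duration metric logged on success. A rough Go sketch of that loop under stated assumptions (`waitReady`, the interval, and the fake condition are all made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitReady polls a condition until it reports ready or the deadline
// passes, logging how long the wait took, like pod_ready.go above.
func waitReady(name string, timeout time.Duration, ready func() (bool, error)) error {
	start := time.Now()
	for deadline := start.Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		ok, err := ready()
		if err != nil {
			fmt.Printf("pod %q is not \"Ready\", error: %v\n", name, err)
			continue
		}
		if ok {
			fmt.Printf("pod %q is \"Ready\" (took %s)\n", name, time.Since(start))
			return nil
		}
	}
	return errors.New("timed out waiting for " + name)
}

func main() {
	n := 0
	_ = waitReady("coredns-66bc5c9577-zrgxp", 5*time.Second, func() (bool, error) {
		n++
		return n >= 3, nil // pretend the pod becomes Ready on the third poll
	})
}
```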
	W1205 08:03:55.516894    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:03:58.012284    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:03:54.578463    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.578463    6576 ubuntu.go:182] provisioning hostname "newest-cni-042100"
	I1205 08:03:54.583153    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.645702    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.646148    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.646193    6576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-042100 && echo "newest-cni-042100" | sudo tee /etc/hostname
	I1205 08:03:54.866524    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-042100
	
	I1205 08:03:54.872867    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:54.933417    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:54.934199    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:54.934272    6576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:03:55.129977    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
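[Editor's note] The SSH command above keeps /etc/hosts idempotent: it does nothing if an entry for the new hostname already exists, prefers rewriting an existing 127.0.1.1 line, and only appends as a last resort. A hedged Go rendering of that decision logic, operating on file contents as a string (`ensureHostname` is a hypothetical helper, not minikube code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: no-op when the name is already
// present, rewrite an existing 127.0.1.1 line if there is one, else append.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // grep -xq '.*\s<name>' succeeded: nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name) // the sed branch
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n" // the tee -a branch
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "newest-cni-042100"))
}
```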
	I1205 08:03:55.129977    6576 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1205 08:03:55.129977    6576 ubuntu.go:190] setting up certificates
	I1205 08:03:55.129977    6576 provision.go:84] configureAuth start
	I1205 08:03:55.133735    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:55.190185    6576 provision.go:143] copyHostCerts
	I1205 08:03:55.190185    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1205 08:03:55.190185    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1205 08:03:55.190984    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1205 08:03:55.191986    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1205 08:03:55.191986    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1205 08:03:55.192251    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1205 08:03:55.193178    6576 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1205 08:03:55.193178    6576 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1205 08:03:55.193462    6576 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1205 08:03:55.194234    6576 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-042100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-042100]
	I1205 08:03:55.277216    6576 provision.go:177] copyRemoteCerts
	I1205 08:03:55.282373    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:03:55.285821    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.350220    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:55.476652    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 08:03:55.511250    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 08:03:55.546706    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 08:03:55.583614    6576 provision.go:87] duration metric: took 453.6304ms to configureAuth
	I1205 08:03:55.583614    6576 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:03:55.585275    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:03:55.589206    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.651189    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.652212    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.652246    6576 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 08:03:55.836329    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1205 08:03:55.837449    6576 ubuntu.go:71] root file system type: overlay
	I1205 08:03:55.837646    6576 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 08:03:55.841558    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:55.910453    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:55.911069    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:55.911069    6576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 08:03:56.123635    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 08:03:56.128031    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.191540    6576 main.go:143] libmachine: Using SSH client type: native
	I1205 08:03:56.191765    6576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff64243ea80] 0x7ff6424415e0 <nil>  [] 0s} 127.0.0.1 62708 <nil> <nil>}
	I1205 08:03:56.191765    6576 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 08:03:56.396364    6576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
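[Editor's note] The unit update above is a write-if-changed idiom: the new docker.service is written to a `.new` path, then `diff -u old new || { mv; daemon-reload; enable; restart; }` replaces the unit and restarts Docker only when the content actually differs, so an unchanged daemon is never bounced. A small Go sketch of the same comparison (the path and the caller's restart step are illustrative assumptions):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateUnit writes newContent to path only when it differs from what is
// already there; a true result means the caller should daemon-reload and
// restart the service, matching the `diff -u || { mv ...; systemctl ... }` idiom.
func updateUnit(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // diff found no changes; leave the service alone
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := updateUnit("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println("restart needed:", changed, "err:", err)
}
```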
	I1205 08:03:56.396364    6576 machine.go:97] duration metric: took 5.1193899s to provisionDockerMachine
	I1205 08:03:56.396364    6576 start.go:293] postStartSetup for "newest-cni-042100" (driver="docker")
	I1205 08:03:56.396897    6576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:03:56.402233    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:03:56.406223    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.460168    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.609105    6576 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:03:56.617925    6576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:03:56.617925    6576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1205 08:03:56.617925    6576 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1205 08:03:56.618732    6576 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem -> 80362.pem in /etc/ssl/certs
	I1205 08:03:56.623542    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:03:56.637899    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /etc/ssl/certs/80362.pem (1708 bytes)
	I1205 08:03:56.671787    6576 start.go:296] duration metric: took 274.8468ms for postStartSetup
	I1205 08:03:56.675921    6576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:03:56.678948    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.735289    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:56.884826    6576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:03:56.893835    6576 fix.go:56] duration metric: took 7.7168367s for fixHost
	I1205 08:03:56.893835    6576 start.go:83] releasing machines lock for "newest-cni-042100", held for 7.7169474s
	I1205 08:03:56.896826    6576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042100
	I1205 08:03:56.959384    6576 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1205 08:03:56.965413    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:56.966255    6576 ssh_runner.go:195] Run: cat /version.json
	I1205 08:03:56.973872    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:03:57.022198    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:03:57.026201    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	W1205 08:03:57.148711    6576 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1205 08:03:57.162212    6576 ssh_runner.go:195] Run: systemctl --version
	I1205 08:03:57.181097    6576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:03:57.193288    6576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:03:57.197753    6576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:03:57.214357    6576 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 08:03:57.214357    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.214357    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.214357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:57.242461    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:03:57.262818    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1205 08:03:57.264705    6576 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1205 08:03:57.264749    6576 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1205 08:03:57.282712    6576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:03:57.286891    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:03:57.310466    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.333091    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:03:57.356105    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:03:57.377603    6576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:03:57.401090    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:03:57.423330    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:03:57.445407    6576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:03:57.472206    6576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:03:57.488210    6576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:03:57.505210    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:57.657790    6576 ssh_runner.go:195] Run: sudo systemctl restart containerd
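[Editor's note] The series of `sed -i -r` calls above rewrites keys in containerd's /etc/containerd/config.toml in place (sandbox_image, SystemdCgroup = false for the detected cgroupfs driver, runc v2, conf_dir) before the daemon-reload and restart. A sketch of one such indentation-preserving rewrite in Go (`setToml` is a hypothetical helper mimicking the sed expression, not a real minikube function):

```go
package main

import (
	"fmt"
	"regexp"
)

// setToml mimics `sed -i -r 's|^( *)KEY = .*$|\1KEY = VALUE|g'`: rewrite
// every `key = ...` line while preserving its leading indentation.
func setToml(config, key, value string) string {
	re := regexp.MustCompile(`(?m)^( *)` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(config, "${1}"+key+" = "+value)
}

func main() {
	cfg := "  [plugins]\n    SystemdCgroup = true\n"
	fmt.Print(setToml(cfg, "SystemdCgroup", "false")) // -> SystemdCgroup = false
}
```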
	I1205 08:03:57.802417    6576 start.go:496] detecting cgroup driver to use...
	I1205 08:03:57.802417    6576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:03:57.807146    6576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 08:03:57.832467    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.857712    6576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 08:03:57.930272    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 08:03:57.960276    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:03:57.984286    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:03:58.017277    6576 ssh_runner.go:195] Run: which cri-dockerd
	I1205 08:03:58.032288    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 08:03:58.048281    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1205 08:03:58.077282    6576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 08:03:58.275290    6576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 08:03:58.457293    6576 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 08:03:58.457293    6576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 08:03:58.486286    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1205 08:03:58.509287    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:03:58.648318    6576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 08:04:00.173930    6576 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5255881s)
	I1205 08:04:00.177929    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:04:00.201541    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 08:04:00.228851    6576 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 08:04:00.259044    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:00.283032    6576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 08:04:00.429299    6576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 08:04:00.593446    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.738544    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 08:04:00.766865    6576 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1205 08:04:00.791407    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:00.930315    6576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 08:04:01.041317    6576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 08:04:01.059628    6576 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 08:04:01.064630    6576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 08:04:01.072635    6576 start.go:564] Will wait 60s for crictl version
	I1205 08:04:01.076636    6576 ssh_runner.go:195] Run: which crictl
	I1205 08:04:01.090615    6576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:04:01.132099    6576 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.0.4
	RuntimeApiVersion:  v1
	I1205 08:04:01.136068    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.182106    6576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 08:04:01.227459    6576 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.0.4 ...
	I1205 08:04:01.231071    6576 cli_runner.go:164] Run: docker exec -t newest-cni-042100 dig +short host.docker.internal
	I1205 08:04:01.375969    6576 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1205 08:04:01.379962    6576 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1205 08:04:01.387350    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.408320    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:01.468320    6576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1205 08:04:00.335905    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	W1205 08:04:00.512126    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	W1205 08:04:03.018493    7752 pod_ready.go:104] pod "coredns-66bc5c9577-gsfxl" is not "Ready", error: <nil>
	I1205 08:04:01.471323    6576 kubeadm.go:884] updating cluster {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:04:01.471323    6576 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 08:04:01.475324    6576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 08:04:01.511342    6576 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 08:04:01.512362    6576 cache_images.go:86] Images are preloaded, skipping loading
	I1205 08:04:01.512362    6576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1205 08:04:01.512362    6576 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-042100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 08:04:01.515327    6576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 08:04:01.600646    6576 cni.go:84] Creating CNI manager for ""
	I1205 08:04:01.600646    6576 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 08:04:01.600646    6576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 08:04:01.600646    6576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042100 NodeName:newest-cni-042100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:04:01.600646    6576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-042100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:04:01.604645    6576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 08:04:01.617663    6576 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:04:01.621646    6576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:04:01.634708    6576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1205 08:04:01.659457    6576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 08:04:01.681516    6576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1205 08:04:01.709549    6576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:04:01.717165    6576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:04:01.737936    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:01.886462    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:01.908845    6576 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100 for IP: 192.168.76.2
	I1205 08:04:01.908845    6576 certs.go:195] generating shared ca certs ...
	I1205 08:04:01.908845    6576 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:01.910250    6576 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1205 08:04:01.910428    6576 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1205 08:04:01.910428    6576 certs.go:257] generating profile certs ...
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\client.key
	I1205 08:04:01.911122    6576 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key.d01368e3
	I1205 08:04:01.911645    6576 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key
	I1205 08:04:01.912393    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem (1338 bytes)
	W1205 08:04:01.912708    6576 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036_empty.pem, impossibly tiny 0 bytes
	I1205 08:04:01.912818    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1205 08:04:01.913109    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1205 08:04:01.913766    6576 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem (1708 bytes)
	I1205 08:04:01.914884    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:04:01.946745    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:04:01.978670    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:04:02.020771    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 08:04:02.052789    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 08:04:02.083785    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:04:02.111686    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:04:02.138106    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-042100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:04:02.167957    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\8036.pem --> /usr/share/ca-certificates/8036.pem (1338 bytes)
	I1205 08:04:02.197699    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\80362.pem --> /usr/share/ca-certificates/80362.pem (1708 bytes)
	I1205 08:04:02.228974    6576 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:04:02.258542    6576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:04:02.283541    6576 ssh_runner.go:195] Run: openssl version
	I1205 08:04:02.296537    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.312534    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/80362.pem /etc/ssl/certs/80362.pem
	I1205 08:04:02.327543    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.334539    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:26 /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.339544    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/80362.pem
	I1205 08:04:02.392223    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:04:02.408977    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.424981    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:04:02.439981    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.446982    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:07 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.451985    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:04:02.500175    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:04:02.518368    6576 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.537597    6576 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8036.pem /etc/ssl/certs/8036.pem
	I1205 08:04:02.555653    6576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.562656    6576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:26 /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.566659    6576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8036.pem
	I1205 08:04:02.617005    6576 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
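[Editor's note] The cert-install loop above repeats a standard OpenSSL trust-store idiom: symlink the .pem into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and verify that /etc/ssl/certs contains a `<hash>.0` symlink (51391683.0, b5213941.0, 3ec20f2e.0 in the log) that OpenSSL can look up. A hedged Go sketch that shells out to the same openssl invocation (`linkCACert` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the steps above: given an installed CA pem, create
// the <subject-hash>.0 symlink that OpenSSL's cert directory lookup expects.
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // like ln -f: replace any stale symlink
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
```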
	I1205 08:04:02.635329    6576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:04:02.649383    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 08:04:02.697863    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 08:04:02.747535    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 08:04:02.802236    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 08:04:02.853222    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 08:04:02.901642    6576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
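[Editor's note] The `-checkend 86400` runs above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a nonzero exit would force regeneration before restarting the cluster. The same check expressed directly against crypto/x509, as a sketch (the path in main is taken from the log; `checkEnd` itself is illustrative):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd mirrors `openssl x509 -noout -checkend <seconds>`: report whether
// the certificate at path is still valid d from now.
func checkEnd(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkEnd("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println("valid beyond 24h:", ok, "err:", err)
}
```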
	I1205 08:04:02.946962    6576 kubeadm.go:401] StartCluster: {Name:newest-cni-042100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-042100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:04:02.951256    6576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 08:04:02.986478    6576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:04:02.999955    6576 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 08:04:02.999955    6576 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 08:04:03.003999    6576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 08:04:03.019291    6576 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 08:04:03.022819    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.083372    6576 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.084185    6576 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042100" cluster setting kubeconfig missing "newest-cni-042100" context setting]
	I1205 08:04:03.084741    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.109144    6576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 08:04:03.128232    6576 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1205 08:04:03.138905    6576 kubeadm.go:602] duration metric: took 138.9481ms to restartPrimaryControlPlane
	I1205 08:04:03.138905    6576 kubeadm.go:403] duration metric: took 191.9404ms to StartCluster
	I1205 08:04:03.138905    6576 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.138905    6576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 08:04:03.141698    6576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:04:03.142419    6576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 08:04:03.142419    6576 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:04:03.142419    6576 config.go:182] Loaded profile config "newest-cni-042100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 08:04:03.163290    6576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting dashboard=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042100"
	I1205 08:04:03.163290    6576 addons.go:239] Setting addon dashboard=true in "newest-cni-042100"
	W1205 08:04:03.163290    6576 addons.go:248] addon dashboard should already be in state true
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.163290    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.173405    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.192363    6576 out.go:179] * Verifying Kubernetes components...
	I1205 08:04:03.249622    6576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-042100"
	I1205 08:04:03.250609    6576 host.go:66] Checking if "newest-cni-042100" exists ...
	I1205 08:04:03.257607    6576 cli_runner.go:164] Run: docker container inspect newest-cni-042100 --format={{.State.Status}}
	I1205 08:04:03.258609    6576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:04:03.261608    6576 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 08:04:03.264610    6576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:04:03.309607    6576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.309607    6576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:04:03.312609    6576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.312609    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:04:03.312609    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.315610    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.318607    6576 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 08:04:03.510751    7752 pod_ready.go:94] pod "coredns-66bc5c9577-gsfxl" is "Ready"
	I1205 08:04:03.510751    7752 pod_ready.go:86] duration metric: took 25.5102081s for pod "coredns-66bc5c9577-gsfxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.517746    7752 pod_ready.go:83] waiting for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.529764    7752 pod_ready.go:94] pod "etcd-kubenet-218000" is "Ready"
	I1205 08:04:03.529764    7752 pod_ready.go:86] duration metric: took 12.0185ms for pod "etcd-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.535749    7752 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.544756    7752 pod_ready.go:94] pod "kube-apiserver-kubenet-218000" is "Ready"
	I1205 08:04:03.544756    7752 pod_ready.go:86] duration metric: took 9.007ms for pod "kube-apiserver-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.549745    7752 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.706418    7752 pod_ready.go:94] pod "kube-controller-manager-kubenet-218000" is "Ready"
	I1205 08:04:03.706418    7752 pod_ready.go:86] duration metric: took 156.6708ms for pod "kube-controller-manager-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:03.906896    7752 pod_ready.go:83] waiting for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.305526    7752 pod_ready.go:94] pod "kube-proxy-l9mnz" is "Ready"
	I1205 08:04:04.305526    7752 pod_ready.go:86] duration metric: took 398.0934ms for pod "kube-proxy-l9mnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.506453    7752 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:94] pod "kube-scheduler-kubenet-218000" is "Ready"
	I1205 08:04:04.908413    7752 pod_ready.go:86] duration metric: took 401.8894ms for pod "kube-scheduler-kubenet-218000" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:04:04.908413    7752 pod_ready.go:40] duration metric: took 37.4190345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:04:05.004707    7752 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 08:04:05.007705    7752 out.go:179] * Done! kubectl is now configured to use "kubenet-218000" cluster and "default" namespace by default
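The pod_ready lines above poll each kube-system pod until it is "Ready" or gone, then report the wait as a duration metric. A minimal sketch of that polling pattern, under stated assumptions: waitReady and the toy check below are hypothetical names for illustration, not minikube's pod_ready.go API, which queries the real API server.

	// Illustrative sketch of a "Ready or gone, with timeout" wait loop.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitReady polls check every interval until the resource is ready or
	// gone, or the timeout elapses, and records how long the wait took.
	func waitReady(name string, check func() (ready bool, gone bool, err error), interval, timeout time.Duration) error {
		start := time.Now()
		deadline := start.Add(timeout)
		for {
			ready, gone, err := check()
			if err != nil {
				return err
			}
			if ready || gone {
				fmt.Printf("duration metric: took %s for %q to be Ready or be gone\n", time.Since(start), name)
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for " + name)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// Toy check that becomes ready after ~300ms, standing in for a
		// kube-system pod status lookup.
		readyAt := time.Now().Add(300 * time.Millisecond)
		check := func() (bool, bool, error) { return time.Now().After(readyAt), false, nil }
		if err := waitReady("coredns-66bc5c9577-gsfxl", check, 50*time.Millisecond, 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}

Treating "gone" the same as "ready" is what lets the wait succeed for pods that are expected to terminate rather than report Ready.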
	I1205 08:04:03.344609    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 08:04:03.344609    6576 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 08:04:03.353008    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.373762    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.389748    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.415749    6576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62708 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-042100\id_rsa Username:docker}
	I1205 08:04:03.454747    6576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:04:03.481745    6576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-042100
	I1205 08:04:03.544756    6576 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:04:03.550761    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:03.552751    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:03.556766    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 08:04:03.556766    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 08:04:03.561743    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:03.627813    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 08:04:03.627923    6576 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 08:04:03.654463    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 08:04:03.654463    6576 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 08:04:03.731575    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 08:04:03.731654    6576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1205 08:04:03.751356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.751356    6576 retry.go:31] will retry after 148.467646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.754346    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	W1205 08:04:03.755354    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.755354    6576 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 08:04:03.755354    6576 retry.go:31] will retry after 202.130528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.774491    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 08:04:03.774491    6576 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 08:04:03.793803    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 08:04:03.793803    6576 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 08:04:03.828295    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 08:04:03.828351    6576 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 08:04:03.851355    6576 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.851355    6576 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 08:04:03.876402    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:03.905217    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:03.957742    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.957742    6576 retry.go:31] will retry after 291.655688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.962256    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:03.992521    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:03.992521    6576 retry.go:31] will retry after 561.792628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.049441    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:04.057481    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.057556    6576 retry.go:31] will retry after 288.112081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.254701    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.343216    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.343216    6576 retry.go:31] will retry after 359.979776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.350062    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:04.431174    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.431174    6576 retry.go:31] will retry after 483.679942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.549772    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:04.559147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:04.642871    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.642871    6576 retry.go:31] will retry after 528.970083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.708123    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:04.787283    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.787283    6576 retry.go:31] will retry after 459.684582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:04.919229    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.004707    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.004707    6576 retry.go:31] will retry after 831.823948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.050298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.177969    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:05.252148    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:05.268807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.268914    6576 retry.go:31] will retry after 1.219301827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:05.381615    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.381684    6576 retry.go:31] will retry after 1.003502336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.548840    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:05.841493    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:05.945714    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:05.945714    6576 retry.go:31] will retry after 1.344373684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.051495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:06.390219    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:06.476859    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.476859    6576 retry.go:31] will retry after 916.677354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.493513    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:06.550586    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:06.586142    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:06.586142    6576 retry.go:31] will retry after 814.667109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.049968    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:07.295279    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:07.385161    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.385225    6576 retry.go:31] will retry after 2.309719888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.397737    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:07.404241    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.24760459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:07.487310    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.487310    6576 retry.go:31] will retry after 2.229405263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:07.550637    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.050329    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:08.551330    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.052416    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.549628    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:09.699045    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:09.722067    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:09.740066    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:09.854063    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.854063    6576 retry.go:31] will retry after 1.718952919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.926061    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.926061    6576 retry.go:31] will retry after 2.401961347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:04:09.960056    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:09.961057    6576 retry.go:31] will retry after 3.751594778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:10.049061    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:10.375252    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): Get "https://127.0.0.1:61565/api/v1/nodes/no-preload-104100": EOF
	I1205 08:04:10.549298    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.049797    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.550139    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:11.577133    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:11.663155    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:11.663155    6576 retry.go:31] will retry after 4.120114825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.049572    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:12.333014    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:12.419653    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.419653    6576 retry.go:31] will retry after 2.740389125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:12.549673    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.050128    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.549901    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:13.717839    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:13.806807    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:13.806807    6576 retry.go:31] will retry after 4.752661147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:14.050521    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:14.551720    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.050682    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.165926    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:15.256271    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.256271    6576 retry.go:31] will retry after 4.534312748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.549805    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:15.787818    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:15.865098    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:15.865628    6576 retry.go:31] will retry after 5.383695211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:16.050434    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:16.549442    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.049923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:17.550083    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.049667    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.551343    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:18.565349    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:18.647263    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:18.647263    6576 retry.go:31] will retry after 8.382323881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.050424    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:19.104488    4560 node_ready.go:55] error getting node "no-preload-104100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1205 08:04:19.104793    4560 node_ready.go:38] duration metric: took 6m0.001013s for node "no-preload-104100" to be "Ready" ...
	I1205 08:04:19.107356    4560 out.go:203] 
	W1205 08:04:19.110511    4560 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 08:04:19.110554    4560 out.go:285] * 
	W1205 08:04:19.112383    4560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 08:04:19.116573    4560 out.go:203] 
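
The node_ready.go lines record the other half of the failure: the no-preload test spent its full 6m0s budget polling node "no-preload-104100" for the Ready condition and never saw it. A minimal client-go sketch of that kind of readiness check, assuming client-go v0.20 or later and a kubeconfig at the default location (the helper name is illustrative, not minikube's):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has the Ready condition True.
    func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
        node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. the EOF / connection refused seen above
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(client, "no-preload-104100")
        fmt.Println("ready:", ready, "err:", err)
    }
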
	I1205 08:04:19.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:19.796280    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:04:19.904265    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:19.904265    6576 retry.go:31] will retry after 5.117792571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:20.052293    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:20.550380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.052677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:21.255736    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:21.356356    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.356356    6576 retry.go:31] will retry after 8.875197166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:21.550333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.049310    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:22.550338    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.050244    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:23.551039    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.050874    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:24.550399    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:25.027043    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:25.050989    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:25.159593    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.159593    6576 retry.go:31] will retry after 7.802785807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:25.553440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.050359    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:26.551986    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:27.034606    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:04:27.050924    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:27.141503    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.141551    6576 retry.go:31] will retry after 13.674183061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:27.553694    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.049210    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:28.550842    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.051091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:29.549571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.051474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:30.237147    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:04:30.345143    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.345143    6576 retry.go:31] will retry after 18.684554823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:30.552505    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.050974    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:31.550315    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.053025    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.550841    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:32.967139    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:33.050008    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:33.074001    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.074001    6576 retry.go:31] will retry after 21.457353412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:33.550375    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.053598    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:34.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.050034    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:35.550853    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.050947    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:36.552933    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.049827    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:37.551205    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.050234    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:38.552156    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.050748    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:39.549737    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.050549    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:40.550949    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
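	Editor's note: between apply attempts the log polls sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms. A sketch of that wait loop follows, assuming a fixed interval and an overall deadline; both are illustrative, and minikube's real readiness logic is more involved than a bare pgrep.

	// pollsketch.go - illustrative wait loop; interval and timeout are assumptions.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			// pgrep exits 0 only when a process matching the pattern exists.
			if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}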
	I1205 08:04:40.819283    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:40.946292    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:40.946292    6576 retry.go:31] will retry after 18.180546633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:41.051295    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:41.551923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.051010    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:42.550802    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.050090    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:43.549595    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.050323    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:44.551060    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.050284    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:45.549318    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.049045    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:46.550390    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.050869    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:47.549920    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.050040    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:48.550378    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:49.037573    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:04:49.050392    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:49.132808    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.132808    6576 retry.go:31] will retry after 12.282235903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:49.549952    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.052465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:50.550412    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.053026    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:51.551123    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.050959    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:52.550243    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.051085    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:53.550766    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.053585    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:54.537931    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 08:04:54.551106    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 08:04:54.662326    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:54.662326    6576 retry.go:31] will retry after 25.982171867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:55.050927    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:55.551197    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.049847    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:56.551717    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.050571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:57.552306    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.050495    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:58.550960    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.050091    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:04:59.133373    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:04:59.223117    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.223117    6576 retry.go:31] will retry after 23.551015037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:04:59.551231    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.047738    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:00.550465    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.051875    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:01.420389    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:01.505728    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.505728    6576 retry.go:31] will retry after 17.206812229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:01.551821    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.051028    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:02.550994    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.051369    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:03.550326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:03.585938    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.585938    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:03.590134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:03.617879    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.617879    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:03.624332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:03.651940    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.651940    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:03.656120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:03.685733    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.685733    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:03.690030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:03.719658    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.719713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:03.723576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:03.755797    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.755797    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:03.760966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:03.789461    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.789461    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:03.793178    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:03.823147    6576 logs.go:282] 0 containers: []
	W1205 08:05:03.823147    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
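	Editor's note: with the apiserver still down, minikube falls back to enumerating each control-plane container by name via docker ps, which is why every component reports 0 containers above. A sketch of that per-component check; the docker command is copied from the log, the component list is taken from the log, and the helper name is hypothetical.

	// containersketch.go - illustrative version of the logs.go:282 container scan.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			if len(ids) == 0 {
				// Matches the log's 'No container was found matching "<name>"' warnings.
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Println(c, ids)
		}
	}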
	I1205 08:05:03.823147    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:03.823679    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:03.890829    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:03.890829    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:03.937573    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:03.937573    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:04.028268    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:04.019442    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.020583    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.021549    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.022516    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:04.023490    3427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
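	Editor's note: every failure above reduces to the same root cause reported in the errors themselves: dial tcp [::1]:8443: connect: connection refused, i.e. nothing is listening on the apiserver port, so OpenAPI validation and describe nodes both fail before TLS or auth enter the picture. A tiny sketch that probes the port directly to confirm this; the address and port come from the log, the probe itself is illustrative.

	// probesketch.go - distinguishes "nothing listening" from higher-level failures.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// A refused dial at this layer explains every kubectl error above:
			// validation needs the OpenAPI endpoint, which needs a live apiserver.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8443; the failure is higher up the stack")
	}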
	I1205 08:05:04.028268    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:04.028268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:04.054265    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:04.054265    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:06.624597    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:06.650113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:06.681568    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.682088    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:06.685527    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:06.715181    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.715181    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:06.718768    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:06.748649    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.748692    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:06.752313    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:06.783519    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.783582    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:06.787257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:06.817858    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.817858    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:06.821703    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:06.854241    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.854241    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:06.857773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:06.888901    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.888901    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:06.894071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:06.923675    6576 logs.go:282] 0 containers: []
	W1205 08:05:06.923675    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:06.923675    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:06.923675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:06.974113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:06.974166    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:07.037689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:07.037689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:07.080588    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:07.080588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:07.171034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:07.161485    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.162459    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.163483    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.164627    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:07.165768    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:07.171067    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:07.171067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:09.706054    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:09.732108    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:09.767273    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.767300    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:09.770837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:09.802479    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.802550    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:09.806320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:09.835537    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.835537    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:09.841566    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:09.874578    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.874578    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:09.878148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:09.906942    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.907017    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:09.910154    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:09.941197    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.941197    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:09.945133    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:09.974591    6576 logs.go:282] 0 containers: []
	W1205 08:05:09.974591    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:09.978698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:10.007749    6576 logs.go:282] 0 containers: []
	W1205 08:05:10.007749    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:10.007749    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:10.007749    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:10.044236    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:10.044236    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:10.130995    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:10.121696    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.122898    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.123892    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.124975    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:10.125947    3753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:10.130995    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:10.130995    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:10.158359    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:10.158945    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:10.209053    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:10.209053    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:12.782787    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:12.809043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:12.839958    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.839958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:12.845180    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:12.876657    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.876720    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:12.880739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:12.908227    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.908227    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:12.912011    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:12.942400    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.942449    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:12.945431    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:12.973155    6576 logs.go:282] 0 containers: []
	W1205 08:05:12.973155    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:12.976739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:13.004259    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.004259    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:13.008151    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:13.038225    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.038225    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:13.041692    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:13.070500    6576 logs.go:282] 0 containers: []
	W1205 08:05:13.070500    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:13.070500    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:13.070500    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:13.134608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:13.134608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:13.173994    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:13.173994    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:13.270602    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:13.260198    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.261222    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.262157    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.263450    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:13.264369    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:13.270665    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:13.270665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:13.299297    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:13.299297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:15.870600    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:15.895506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:15.927013    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.927013    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:15.930717    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:15.959875    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.959941    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:15.963955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:15.992862    6576 logs.go:282] 0 containers: []
	W1205 08:05:15.992862    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:15.996303    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:16.023966    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.023966    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:16.027786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:16.058698    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.058698    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:16.065246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:16.094826    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.094826    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:16.098650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:16.144774    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.144820    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:16.148422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:16.177296    6576 logs.go:282] 0 containers: []
	W1205 08:05:16.177296    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:16.177296    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:16.177296    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:16.242225    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:16.242225    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:16.283778    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:16.283778    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:16.378623    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:16.368649    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.369764    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.370846    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.372936    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:16.374055    4094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:16.378623    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:16.378623    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:16.408736    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:16.409256    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:18.719251    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 08:05:18.815541    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:18.815541    6576 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	]
	I1205 08:05:18.959261    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:18.983847    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:19.016048    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.016048    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:19.022913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:19.054693    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.054752    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:19.058555    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:19.087342    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.087342    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:19.090772    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:19.118199    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.118199    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:19.121567    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:19.151346    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.151346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:19.155305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:19.186521    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.186611    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:19.190219    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:19.220730    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.220730    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:19.225064    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:19.255890    6576 logs.go:282] 0 containers: []
	W1205 08:05:19.256013    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:19.256013    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:19.256013    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:19.324476    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:19.324476    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:19.362802    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:19.362802    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:19.443537    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:19.435220    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.436589    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.437697    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.439019    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:19.440328    4268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:19.444546    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:19.444546    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:19.474585    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:19.474647    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:20.651307    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:20.735190    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:20.735294    6576 retry.go:31] will retry after 27.405422909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 08:05:22.034778    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:22.060808    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:22.093037    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.093111    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:22.097193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:22.124988    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.125036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:22.128496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:22.157896    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.157947    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:22.161826    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:22.190808    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.190839    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:22.194900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:22.227226    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.227346    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:22.230966    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:22.260811    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.260861    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:22.264784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:22.295222    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.295331    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:22.302135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:22.343045    6576 logs.go:282] 0 containers: []
	W1205 08:05:22.343116    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:22.343116    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:22.343116    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:22.394026    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:22.394026    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:22.457078    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:22.457078    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:22.498385    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:22.498434    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:22.581112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:22.571774    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.572814    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574067    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.574928    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:22.577446    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:22.581112    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:22.581112    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:22.780060    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 08:05:22.859804    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 08:05:22.859804    6576 retry.go:31] will retry after 21.036491608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 08:05:25.113006    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:25.148820    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:25.186604    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.186604    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:25.191401    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:25.223786    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.223867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:25.227359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:25.262253    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.262310    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:25.266030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:25.298397    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.298433    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:25.303771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:25.334112    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.334112    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:25.338565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:25.370125    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.370206    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:25.374513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:25.406130    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.406219    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:25.410417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:25.442663    6576 logs.go:282] 0 containers: []
	W1205 08:05:25.442742    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:25.442742    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:25.442742    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:25.479786    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:25.479786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:25.573308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:25.562787    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.563766    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.565621    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.567187    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:25.568377    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:25.573308    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:25.573308    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:25.599667    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:25.599667    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:25.650617    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:25.650617    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.218354    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:28.243705    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:28.279022    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.279022    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:28.283525    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:28.313798    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.313798    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:28.318172    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:28.347700    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.347700    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:28.351701    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:28.381257    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.381341    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:28.384917    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:28.416041    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.416041    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:28.419541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:28.447349    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.447349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:28.451684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:28.479275    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.479307    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:28.483095    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:28.511115    6576 logs.go:282] 0 containers: []
	W1205 08:05:28.511187    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:28.511187    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:28.511237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:28.574706    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:28.574706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:28.615541    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:28.615541    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:28.709604    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:28.698183    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.699114    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.700360    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.702870    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:28.703910    4778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:28.709604    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:28.709604    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:28.738815    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:28.738815    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.300476    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:31.328202    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:31.357921    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.357958    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:31.361905    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:31.390844    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.390926    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:31.395488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:31.426488    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.426570    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:31.430048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:31.461632    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.461687    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:31.465105    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:31.492594    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.492657    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:31.496042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:31.523806    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.523834    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:31.527758    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:31.557959    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.558020    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:31.561776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:31.588451    6576 logs.go:282] 0 containers: []
	W1205 08:05:31.588485    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:31.588513    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:31.588535    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:31.675984    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:31.663813    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.664690    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.666725    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.667569    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:31.669348    4931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:31.675984    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:31.675984    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:31.706483    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:31.706567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:31.753154    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:31.753677    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:31.813379    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:31.813379    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.359731    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:34.386737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:34.416273    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.416306    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:34.419220    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:34.452145    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.452661    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:34.456139    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:34.486541    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.486593    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:34.489738    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:34.520642    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.520642    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:34.524007    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:34.556848    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.556848    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:34.560551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:34.589976    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.589976    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:34.594061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:34.623871    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.623871    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:34.627661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:34.655428    6576 logs.go:282] 0 containers: []
	W1205 08:05:34.655428    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:34.655428    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:34.655428    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:34.693248    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:34.693248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:34.782095    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:34.769118    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.770129    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.774903    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.775762    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:34.777785    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:34.782095    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:34.782095    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:34.809243    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:34.809243    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:34.859486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:34.859486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.427533    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:37.454695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:37.485702    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.485702    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:37.489329    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:37.522074    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.522074    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:37.525283    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:37.555534    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.555534    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:37.559473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:37.589923    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.589923    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:37.593340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:37.625230    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.625230    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:37.628764    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:37.658722    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.658722    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:37.661870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:37.693003    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.693003    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:37.696992    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:37.726216    6576 logs.go:282] 0 containers: []
	W1205 08:05:37.726286    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:37.726286    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:37.726333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:37.791305    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:37.791305    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:37.829600    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:37.829600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:37.920892    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:37.910351    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.911392    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.912203    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.914890    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:37.916466    5259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:37.920892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:37.920892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:37.947989    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:37.947989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:40.501988    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:40.527784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:40.563590    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.563590    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:40.567375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:40.598332    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.598332    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:40.602019    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:40.629289    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.629289    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:40.633378    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:40.660574    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.660630    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:40.664275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:40.691063    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.691063    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:40.694694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:40.723611    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.723667    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:40.726975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:40.755155    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.755155    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:40.759134    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:40.793723    6576 logs.go:282] 0 containers: []
	W1205 08:05:40.793723    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:40.793723    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:40.793723    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:40.831198    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:40.831198    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:40.925587    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:40.914619    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.915635    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.918057    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.919839    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:40.921449    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:40.925587    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:40.925587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:40.954081    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:40.954114    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:41.007048    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:41.007096    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:43.582160    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:43.607539    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:43.638277    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.638277    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:43.642375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:43.675099    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.675099    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:43.678089    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:43.706803    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.706803    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:43.713114    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:43.740522    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.740522    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:43.744411    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:43.773724    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.773780    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:43.777763    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:43.803962    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.803962    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:43.807698    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:43.839559    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.839559    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:43.843918    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:43.876174    6576 logs.go:282] 0 containers: []
	W1205 08:05:43.876252    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:43.876252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:43.876252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:43.902671    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:05:43.934973    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:43.934973    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 08:05:43.999146    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:43.999146    6576 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:44.032735    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:44.033740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:44.075384    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:44.075384    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:44.157223    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:44.148191    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.149294    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.151729    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.152742    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:44.154287    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:44.157223    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:44.157223    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:46.691333    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:46.717072    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:46.748595    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.748595    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:46.752218    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:46.780374    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.780374    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:46.783922    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:46.815066    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.815066    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:46.818942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:46.847510    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.847563    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:46.851012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:46.883362    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.883465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:46.886941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:46.916379    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.916451    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:46.920641    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:46.949114    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.949114    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:46.953549    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:46.983164    6576 logs.go:282] 0 containers: []
	W1205 08:05:46.983164    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:46.983164    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:46.983164    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:47.022255    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:47.022255    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:47.111784    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:47.103723    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.104904    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.105980    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.106921    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:47.108068    5763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:47.111860    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:47.111860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:47.138559    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:47.138559    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:47.188823    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:47.189346    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:48.147422    6576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 08:05:48.239875    6576 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 08:05:48.239875    6576 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 08:05:48.242898    6576 out.go:179] * Enabled addons: 
	I1205 08:05:48.245836    6576 addons.go:530] duration metric: took 1m45.1017438s for enable addons: enabled=[]
	I1205 08:05:49.757493    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:49.785573    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:49.818757    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.818757    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:49.822359    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:49.849919    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.849919    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:49.853892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:49.881451    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.881451    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:49.884508    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:49.916549    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.916599    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:49.922025    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:49.955857    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.955857    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:49.959871    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:49.992747    6576 logs.go:282] 0 containers: []
	W1205 08:05:49.992747    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:49.997745    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:50.027985    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.027985    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:50.032696    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:50.066315    6576 logs.go:282] 0 containers: []
	W1205 08:05:50.066315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:50.066315    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:50.066315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:50.162764    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:50.153626    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.154703    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.155668    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.156722    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:50.157515    5935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:50.162764    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:50.162764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:50.190807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:50.190807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:50.244357    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:50.244357    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:50.306832    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:50.306832    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:52.850828    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:52.881404    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:52.914164    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.914164    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:52.919056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:52.946339    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.946339    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:52.950249    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:52.977159    6576 logs.go:282] 0 containers: []
	W1205 08:05:52.977159    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:52.981587    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:53.011126    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.011126    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:53.016170    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:53.050900    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.050900    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:53.055929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:53.086492    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.086492    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:53.091422    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:53.123587    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.123587    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:53.126586    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:53.155525    6576 logs.go:282] 0 containers: []
	W1205 08:05:53.155525    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:53.155525    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:53.155525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:53.220198    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:53.221197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:53.261683    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:53.261683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:53.355432    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:05:53.347461    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.348650    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.349774    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.350595    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:53.352462    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:05:53.355432    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:53.355432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:53.386521    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:53.386521    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:55.947613    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:55.973795    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:56.007916    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.007916    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:56.011792    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:56.045094    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.045094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:56.048513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:56.082501    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.082501    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:56.086603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:56.116918    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.117005    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:56.120916    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:56.150716    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.150716    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:56.154101    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:56.186882    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.186882    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:56.190500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:56.223741    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.223741    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:56.227290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:56.255902    6576 logs.go:282] 0 containers: []
	W1205 08:05:56.255902    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:56.255902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:56.255902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:05:56.285180    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:56.285180    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:56.333650    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:56.333650    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:56.393332    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:56.393332    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:56.432841    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:56.432841    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:56.521419    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:56.509800    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.510486    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.512803    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.513515    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:56.516078    6279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
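	The block above is the core failure signature: every kubectl call is refused on localhost:8443, so nothing is listening on the apiserver port inside the node. A minimal manual probe, assuming shell access to the node (e.g. via minikube ssh into the affected profile) and that ss and curl are present in the node image (an assumption, not shown in this log):

		# Is anything listening on the apiserver port?
		sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
		# Probe the health endpoint directly; -k skips TLS verification
		curl -k https://localhost:8443/healthz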
	I1205 08:05:59.025923    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:05:59.056473    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:05:59.091893    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.091909    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:05:59.095650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:05:59.128079    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.128185    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:05:59.131611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:05:59.159655    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.159655    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:05:59.163348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:05:59.192422    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.192422    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:05:59.196339    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:05:59.226737    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.226737    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:05:59.230776    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:05:59.258194    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.258194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:05:59.261784    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:05:59.292592    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.292592    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:05:59.296370    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:05:59.323764    6576 logs.go:282] 0 containers: []
	W1205 08:05:59.323764    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:05:59.323764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:05:59.323764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:05:59.375689    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:05:59.376207    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:05:59.440586    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:05:59.440586    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:05:59.479856    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:05:59.479856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:05:59.578161    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:05:59.565061    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.568353    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.570201    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.571693    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:05:59.572802    6438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:05:59.578161    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:05:59.578161    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
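	Between kubectl retries, minikube checks for each expected control-plane container by name prefix, as the docker ps lines above show. The same sweep can be reproduced by hand; this is a sketch assuming docker CLI access on the node, with the component list taken verbatim from the log:

		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
		  ids=$(docker ps -a --filter=name=k8s_$c --format '{{.ID}}')
		  [ -z "$ids" ] && echo "no container matching $c"   # mirrors the logs.go:284 warnings
		done

	Here every component reports zero containers, which means the control plane never started at all rather than crashing after coming up.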
	I1205 08:06:02.111153    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:02.137611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:02.172231    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.172231    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:02.176271    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:02.208274    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.208274    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:02.211990    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:02.244184    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.244245    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:02.247661    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:02.278388    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.278388    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:02.282228    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:02.312290    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.312290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:02.316470    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:02.345487    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.345487    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:02.349444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:02.378305    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.378305    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:02.381923    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:02.409737    6576 logs.go:282] 0 containers: []
	W1205 08:06:02.409737    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:02.409737    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:02.409737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:02.477029    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:02.477029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:02.517422    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:02.517422    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:02.605249    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:02.593783    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.594894    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.595810    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.599388    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:02.600426    6587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:02.605249    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:02.605249    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:02.632767    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:02.632828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
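	The gather step repeated throughout this excerpt runs a fixed set of commands; copied from the log itself, they can be run directly on the node to inspect the same sources:

		sudo journalctl -u kubelet -n 400                                        # last 400 kubelet lines
		sudo journalctl -u docker -u cri-docker -n 400                           # Docker and cri-docker units
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status, with docker fallback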
	I1205 08:06:05.196182    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:05.221488    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:05.251281    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.251355    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:05.254854    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:05.284103    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.284103    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:05.288076    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:05.315552    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.315552    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:05.319409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:05.347664    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.347664    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:05.351387    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:05.382685    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.382685    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:05.386801    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:05.416816    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.416816    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:05.421471    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:05.451265    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.451350    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:05.455129    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:05.486455    6576 logs.go:282] 0 containers: []
	W1205 08:06:05.486455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:05.486455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:05.486455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:05.548252    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:05.548252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:05.586103    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:05.586103    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:05.689902    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:05.677448    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.678605    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.679150    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.681481    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:05.682296    6749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:05.689902    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:05.689902    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:05.715463    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:05.715463    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:08.298546    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:08.325694    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:08.358357    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.358427    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:08.362535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:08.393631    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.393631    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:08.397365    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:08.429162    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.429162    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:08.433444    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:08.464672    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.464672    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:08.467810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:08.496450    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.496450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:08.499640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:08.526246    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.526246    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:08.530507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:08.558130    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.558130    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:08.561856    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:08.590753    6576 logs.go:282] 0 containers: []
	W1205 08:06:08.590753    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:08.590753    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:08.590753    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:08.656049    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:08.656049    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:08.697268    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:08.697268    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:08.794510    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:08.781524    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.783127    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.784980    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.787090    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:08.789080    6922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:08.794510    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:08.794510    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:08.839662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:08.839734    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:11.394677    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:11.423727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:11.453346    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.453346    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:11.460955    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:11.498834    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.498834    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:11.498834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:11.532657    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.532657    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:11.540987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:11.575759    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.575786    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:11.579561    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:11.612047    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.612102    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:11.615579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:11.644318    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.644370    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:11.648326    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:11.678026    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.678026    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:11.681899    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:11.711631    6576 logs.go:282] 0 containers: []
	W1205 08:06:11.711631    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:11.711631    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:11.711631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:11.772905    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:11.772905    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:11.814639    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:11.814639    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:11.905607    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:11.894108    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.894923    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.897880    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.898810    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:11.901603    7102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:11.905657    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:11.905700    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:11.934717    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:11.935238    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:14.488836    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:14.512857    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:14.546571    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.546571    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:14.549903    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:14.580887    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.580887    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:14.584967    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:14.630312    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.630312    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:14.633809    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:14.667373    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.667373    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:14.671026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:14.699813    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.699813    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:14.703177    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:14.734619    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.734619    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:14.739056    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:14.769129    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.769129    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:14.773030    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:14.803689    6576 logs.go:282] 0 containers: []
	W1205 08:06:14.803689    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:14.803689    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:14.803689    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:14.841923    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:14.841923    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:14.932570    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:14.922654    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.923694    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.924737    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.926216    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:14.927697    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:14.932570    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:14.932570    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:14.961067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:14.961591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:15.010912    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:15.010953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:17.575458    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:17.603741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:17.636367    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.636367    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:17.640529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:17.668380    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.668380    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:17.672111    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:17.700544    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.700544    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:17.704634    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:17.736823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.736823    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:17.741002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:17.770125    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.770125    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:17.775816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:17.812823    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.812823    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:17.815683    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:17.844895    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.844895    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:17.849115    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:17.880706    6576 logs.go:282] 0 containers: []
	W1205 08:06:17.880706    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:17.880706    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:17.880706    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:17.969171    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:17.958966    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.959876    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.961650    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.962479    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:17.965271    7418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:17.969171    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:17.969263    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:17.995396    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:17.995396    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:18.044466    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:18.044466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:18.105721    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:18.105721    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
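	The roughly three-second cadence between the pgrep lines shows the wait loop polling for the apiserver process. A hedged sketch of that loop, where the 300s deadline is an assumed value for illustration (the actual timeout is not visible in this excerpt):

		deadline=$(( $(date +%s) + 300 ))   # assumed timeout, not taken from the log
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		  [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; break; }
		  sleep 3
		done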
	I1205 08:06:20.651671    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:20.679273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:20.707727    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.707727    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:20.711373    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:20.741891    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.741891    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:20.746073    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:20.777260    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.777260    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:20.780520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:20.816982    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.816982    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:20.820520    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:20.850461    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.850461    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:20.854205    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:20.882429    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.882429    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:20.886920    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:20.914179    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.914179    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:20.917831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:20.949708    6576 logs.go:282] 0 containers: []
	W1205 08:06:20.949708    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:20.949708    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:20.949708    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:21.013967    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:21.013967    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:21.053946    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:21.053946    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:21.140482    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:21.131399    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.132495    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.133361    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.136095    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:21.137526    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:21.141002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:21.141002    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:21.170239    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:21.170239    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:23.729627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:23.758686    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:23.791537    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.791594    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:23.796131    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:23.827894    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.827894    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:23.832419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:23.862718    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.862718    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:23.867837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:23.896272    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.896272    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:23.900193    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:23.929016    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.929078    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:23.932778    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:23.962372    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.962447    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:23.966147    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:23.998472    6576 logs.go:282] 0 containers: []
	W1205 08:06:23.998472    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:24.004351    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:24.033564    6576 logs.go:282] 0 containers: []
	W1205 08:06:24.033564    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:24.033564    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:24.033564    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:24.099505    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:24.099505    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:24.139900    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:24.139900    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:24.233474    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:24.224899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.225899    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.228678    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.229782    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:24.230895    7747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:06:24.233474    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:24.233474    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:24.263408    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:24.263408    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
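	[Note] Each gather cycle above runs the same probes: pgrep for a kube-apiserver process, then docker ps -a with a k8s_<component> name filter per control-plane component (logs.go counts the returned IDs, hence "0 containers"), and finally the container-status fallback, where `which crictl || echo crictl` keeps the command valid when crictl is absent so the `|| sudo docker ps -a` branch still runs. A minimal standalone Go sketch of the enumeration step, assuming only a local docker CLI; the helper is illustrative, not minikube's implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists IDs of all containers (running or exited) whose
	// name matches the k8s_<component> filter used in the log above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		// One ID per line; empty output means no matching containers.
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "kubernetes-dashboard"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}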
	I1205 08:06:26.816321    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:26.841457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:26.872936    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.872992    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:26.876345    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:26.908512    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.908580    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:26.912736    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:26.944068    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.944068    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:26.947603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:26.975323    6576 logs.go:282] 0 containers: []
	W1205 08:06:26.975360    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:26.978941    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:27.008708    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.008751    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:27.012371    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:27.044160    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.044225    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:27.047780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:27.078172    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.078172    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:27.081803    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:27.111287    6576 logs.go:282] 0 containers: []
	W1205 08:06:27.111370    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:27.111370    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:27.111435    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:27.161265    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:27.161329    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:27.221473    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:27.221473    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:27.263907    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:27.263907    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:27.357876    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:27.345749    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.346908    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.348249    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.352136    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:27.353079    7922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:27.357876    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:27.357876    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:29.890252    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:29.916690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:29.946274    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.946274    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:29.950679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:29.979149    6576 logs.go:282] 0 containers: []
	W1205 08:06:29.979149    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:29.982229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:30.010085    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.010085    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:30.014016    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:30.043254    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.043254    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:30.048048    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:30.080613    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.080613    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:30.084300    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:30.114627    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.114627    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:30.118584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:30.147947    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.148009    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:30.151166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:30.180743    6576 logs.go:282] 0 containers: []
	W1205 08:06:30.180828    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:30.180828    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:30.180828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:30.244646    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:30.244646    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:30.286079    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:30.286079    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:30.376557    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:30.366006    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.367121    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.368987    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.370023    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:30.372180    8068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:30.376557    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:30.376557    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:30.405737    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:30.405737    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:32.958550    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:32.987728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:33.018308    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.018370    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:33.022062    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:33.052435    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.052435    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:33.056434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:33.085355    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.085426    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:33.089343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:33.121676    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.121737    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:33.125504    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:33.157765    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.157765    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:33.161892    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:33.191061    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.191061    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:33.194930    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:33.223173    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.223173    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:33.226650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:33.257481    6576 logs.go:282] 0 containers: []
	W1205 08:06:33.257481    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:33.257481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:33.257481    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:33.301467    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:33.301467    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:33.389528    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:33.379765    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.380723    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.382170    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.383299    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:33.384532    8231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:33.389528    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:33.389528    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:33.418631    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:33.418631    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:33.465106    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:33.465185    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.034296    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:36.063459    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:36.095210    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.095210    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:36.098565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:36.127708    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.127786    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:36.131615    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:36.159964    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.159964    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:36.163771    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:36.192604    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.192604    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:36.196679    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:36.224877    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.224958    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:36.228553    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:36.258280    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.258280    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:36.261911    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:36.294140    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.294140    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:36.298273    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:36.329657    6576 logs.go:282] 0 containers: []
	W1205 08:06:36.329657    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:36.329657    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:36.329657    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:36.387784    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:36.387784    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:36.452385    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:36.452385    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:36.493394    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:36.493394    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:36.591485    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:36.580656    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.581662    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.583757    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.584584    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:36.585940    8418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:36.591485    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:36.591567    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.124474    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:39.152578    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:39.183392    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.183392    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:39.187028    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:39.216193    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.216193    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:39.219743    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:39.251680    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.251759    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:39.255869    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:39.283843    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.283843    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:39.287237    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:39.316021    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.316021    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:39.319015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:39.349194    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.349194    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:39.352951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:39.403729    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.403729    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:39.411012    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:39.442909    6576 logs.go:282] 0 containers: []
	W1205 08:06:39.442909    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:39.442909    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:39.442909    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:39.509174    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:39.509174    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:39.550483    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:39.550483    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:39.650354    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:39.636654    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.641652    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.643241    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.644481    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:39.645410    8573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:39.650354    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:39.650354    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:39.676786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:39.676786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.228069    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:42.258786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:42.290791    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.290791    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:42.294739    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:42.326094    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.326094    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:42.329725    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:42.356052    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.356052    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:42.359752    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:42.390464    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.390464    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:42.393935    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:42.421882    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.421882    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:42.426609    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:42.457036    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.457036    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:42.460988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:42.486064    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.486064    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:42.491250    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:42.521748    6576 logs.go:282] 0 containers: []
	W1205 08:06:42.521748    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:42.521748    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:42.521748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:42.551195    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:42.552197    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:42.613626    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:42.613683    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:42.678856    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:42.679856    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:42.719297    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:42.719297    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:42.811034    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:42.801788    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.802863    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.803799    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.804817    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:42.806589    8754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
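	[Note] Every describe-nodes attempt above dies the same way: kubectl cannot even open a TCP connection to localhost:8443 ("connect: connection refused"), so the failure is "apiserver not listening at all", not a TLS or auth problem. A minimal probe sketch that separates those cases, assuming the same host and port as the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Connection refused here means no process has bound the port,
		// matching the failure mode repeated throughout this log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open; a failure past this point would be TLS/auth instead")
	}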
	I1205 08:06:45.316640    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:45.343574    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:45.372899    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.372899    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:45.376229    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:45.408264    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.408264    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:45.412119    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:45.440697    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.440697    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:45.444501    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:45.471692    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.471727    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:45.475496    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:45.508400    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.508450    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:45.512541    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:45.544177    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.544233    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:45.548858    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:45.579165    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.579165    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:45.582164    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:45.623052    6576 logs.go:282] 0 containers: []
	W1205 08:06:45.623052    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:45.623052    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:45.623052    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:45.651554    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:45.651554    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:45.701716    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:45.701768    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:45.766248    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:45.766248    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:45.806341    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:45.806341    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:45.895675    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:45.887090    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.887957    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.889635    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.891227    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:45.892420    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:48.401571    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:48.432481    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:48.466418    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.466418    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:48.471424    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:48.503617    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.503617    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:48.507677    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:48.541480    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.541480    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:48.547529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:48.579177    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.579177    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:48.585087    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:48.626465    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.626465    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:48.630533    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:48.660304    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.660304    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:48.663999    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:48.694957    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.694957    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:48.699665    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:48.725908    6576 logs.go:282] 0 containers: []
	W1205 08:06:48.725908    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:48.725908    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:48.725908    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:48.817395    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:48.808728    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.809954    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.811269    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.812666    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:48.813960    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:48.817466    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:48.817466    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:48.848226    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:48.848739    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:48.900060    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:48.900060    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:48.962797    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:48.962797    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.508647    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:51.536278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:51.573226    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.573323    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:51.578061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:51.614603    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.614603    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:51.619576    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:51.647095    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.647095    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:51.652535    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:51.680320    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.680369    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:51.684269    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:51.717798    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.717827    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:51.721877    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:51.750482    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.750482    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:51.754602    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:51.786216    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.786216    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:51.790834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:51.819030    6576 logs.go:282] 0 containers: []
	W1205 08:06:51.819030    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:51.819030    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:51.819030    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:06:51.876069    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:51.876110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:51.938469    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:51.938469    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:51.980953    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:51.980953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:52.079938    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:52.071074    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.072315    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.073508    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.074698    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:52.077127    9242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:52.079938    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:52.079938    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
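	[Note] The cycle timestamps (08:06:24, :27, :30, ...) show a roughly three-second retry cadence. A generic poll-until-deadline loop in that style, built on the same pgrep test the log runs at the top of each cycle; the interval and timeout here are illustrative, not minikube's actual tuning:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitFor polls check every interval until it succeeds or timeout elapses.
	func waitFor(interval, timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return fmt.Errorf("timed out: last error: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// Same process test as the log: pgrep exits non-zero while no
		// kube-apiserver process matching the pattern exists.
		check := func() error {
			return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		}
		if err := waitFor(3*time.Second, 30*time.Second, check); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver process found")
	}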
	I1205 08:06:54.616891    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:54.642146    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:54.675691    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.675691    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:54.679440    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:54.709522    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.709522    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:54.713343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:54.744053    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.744112    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:54.748148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:54.782163    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.782232    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:54.786128    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:54.817067    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.817067    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:54.820867    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:54.850003    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.850003    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:54.854439    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:54.882517    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.882566    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:54.886475    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:54.917057    6576 logs.go:282] 0 containers: []
	W1205 08:06:54.917057    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
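	Each gather pass then runs one docker ps per expected component, filtering on the k8s_ name prefix that dockershim-style runtimes (here cri-dockerd) give pod containers; all eight queries come back empty, consistent with the apiserver never having started. The sequence is equivalent to this loop:

	    # List container IDs for each control-plane (and addon) component by the
	    # k8s_<name> convention; an empty result means no such container exists.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	    done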
	I1205 08:06:54.917057    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:54.917057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:54.982333    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:54.982333    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
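	For reference, the journal and kernel-log invocations above expand to these long-form flags (util-linux dmesg: -P is --nopager, -H is --human, -L is --color):

	    # journalctl: -u selects a systemd unit (repeatable), -n keeps the newest N lines.
	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager

	    # dmesg: human-readable, no pager, no color codes, warnings and worse only.
	    sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400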
	I1205 08:06:55.023534    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:55.023534    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:55.136747    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:55.123502    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.124559    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.126082    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.128856    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:55.130269    9389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:55.136823    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:55.136823    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:55.169237    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:55.169237    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
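	The container-status command is a small shell fallback idiom: "which crictl || echo crictl" substitutes the bare name when crictl is not installed (so the first command simply fails), and the outer || then falls through to plain docker. An explicit equivalent:

	    # Prefer a CRI-aware listing when crictl is installed; otherwise fall
	    # back to querying Docker directly.
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi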
	I1205 08:06:57.723958    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:06:57.750382    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:06:57.784932    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.784932    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:06:57.788837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:06:57.815350    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.815350    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:06:57.819773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:06:57.850513    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.850513    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:06:57.854585    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:06:57.885405    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.885405    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:06:57.889340    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:06:57.917143    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.917143    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:06:57.921061    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:06:57.947843    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.947843    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:06:57.951577    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:06:57.983169    6576 logs.go:282] 0 containers: []
	W1205 08:06:57.983169    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:06:57.986925    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:06:58.016381    6576 logs.go:282] 0 containers: []
	W1205 08:06:58.016381    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:06:58.016381    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:06:58.016381    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:06:58.081766    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:06:58.081766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:06:58.122021    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:06:58.122021    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:06:58.216654    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:06:58.206525    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.207866    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.208979    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.210154    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:06:58.211365    9554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:06:58.216654    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:06:58.216654    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:06:58.245369    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:06:58.245369    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:00.814255    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:00.841335    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:00.870336    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.870336    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:00.874294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:00.905321    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.905321    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:00.908814    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:00.940896    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.940896    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:00.944651    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:00.975783    6576 logs.go:282] 0 containers: []
	W1205 08:07:00.975855    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:00.979485    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:01.007166    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.007166    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:01.011052    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:01.038708    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.038708    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:01.043766    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:01.072944    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.072944    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:01.076562    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:01.104574    6576 logs.go:282] 0 containers: []
	W1205 08:07:01.104623    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:01.104665    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:01.104665    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:01.169748    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:01.169748    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:01.210259    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:01.210259    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:01.310310    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:01.293458    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.302627    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.303848    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.304980    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:01.306049    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:01.310310    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:01.310310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:01.336589    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:01.336589    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:03.889510    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:03.919078    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:03.953291    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.953291    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:03.956276    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:03.986975    6576 logs.go:282] 0 containers: []
	W1205 08:07:03.986975    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:03.991157    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:04.022935    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.022935    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:04.026117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:04.058273    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.058312    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:04.061868    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:04.093136    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.093136    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:04.096666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:04.122322    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.122349    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:04.126167    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:04.158513    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.158545    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:04.161969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:04.190492    6576 logs.go:282] 0 containers: []
	W1205 08:07:04.190569    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:04.190569    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:04.190569    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:04.259062    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:04.259062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:04.299558    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:04.299558    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:04.393556    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:04.380132    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.380915    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.387013    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.388309    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:04.389163    9894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:04.393644    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:04.393644    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:04.420122    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:04.420122    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:06.976110    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:07.001980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:07.033975    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.033975    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:07.040090    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:07.069823    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.069823    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:07.074015    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:07.103072    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.103072    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:07.107448    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:07.138770    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.138770    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:07.142987    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:07.174660    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.174660    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:07.178913    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:07.209719    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.209719    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:07.215472    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:07.243539    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.243539    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:07.248737    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:07.279448    6576 logs.go:282] 0 containers: []
	W1205 08:07:07.279448    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:07.279448    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:07.279448    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:07.345481    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:07.346489    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:07.384275    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:07.384275    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:07.479588    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:07.468905   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.469966   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.471760   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473059   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:07.473787   10055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:07.479588    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:07.479588    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:07.506786    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:07.506786    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:10.078099    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:10.103951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:10.139034    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.139034    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:10.142691    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:10.174629    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.174629    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:10.178323    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:10.206817    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.206817    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:10.210968    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:10.239729    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.239820    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:10.245043    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:10.277712    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.277712    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:10.283741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:10.315362    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.315362    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:10.318268    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:10.346693    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.346693    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:10.350670    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:10.379081    6576 logs.go:282] 0 containers: []
	W1205 08:07:10.379081    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:10.379081    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:10.379081    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:10.443299    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:10.443299    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:10.482497    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:10.482497    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:10.567024    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:10.557516   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.559649   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.560652   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.561768   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:10.562890   10222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:10.567024    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:10.567024    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:10.596635    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:10.596635    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:13.157670    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:13.186965    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:13.222698    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.222730    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:13.226690    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:13.261914    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.261957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:13.265780    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:13.294590    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.294590    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:13.299066    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:13.329216    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.329216    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:13.334474    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:13.366263    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.366290    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:13.369870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:13.398379    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.398379    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:13.402396    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:13.430465    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.430465    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:13.434253    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:13.462873    6576 logs.go:282] 0 containers: []
	W1205 08:07:13.462905    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:13.462905    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:13.462949    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:13.525954    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:13.526955    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:13.566284    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:13.567284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:13.656971    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:13.646967   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.647963   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.649311   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.651420   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:13.652532   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:13.656971    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:13.656971    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:13.684284    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:13.684284    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.241440    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:16.268513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:16.302653    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.302653    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:16.306429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:16.337387    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.337387    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:16.342004    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:16.371449    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.371449    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:16.376376    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:16.406912    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.406912    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:16.410777    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:16.438875    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.438875    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:16.442983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:16.470299    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.470299    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:16.474336    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:16.504067    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.504067    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:16.508174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:16.536869    6576 logs.go:282] 0 containers: []
	W1205 08:07:16.536869    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:16.536869    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:16.536869    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:16.624673    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:16.614309   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.615561   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.617384   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.619541   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:16.620393   10541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:16.624703    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:16.624755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:16.653894    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:16.653894    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:16.701985    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:16.701985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:16.763148    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:16.763148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
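	From here the gather pass simply repeats on a roughly three-second cadence (08:06:52, 08:06:54, 08:06:57, ...) while minikube waits for the apiserver to come up; the "Gathering logs for ..." order also shuffles between passes, most likely because the gatherers are iterated from a Go map, whose iteration order is randomized. A comparable manual watch while debugging would be:

	    # Poll for the apiserver container every 3 seconds until it appears.
	    watch -n 3 "docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'"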
	I1205 08:07:19.307232    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:19.334513    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:19.371034    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.371140    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:19.375038    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:19.403110    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.403186    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:19.407168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:19.435904    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.435904    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:19.440294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:19.470700    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.470700    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:19.474611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:19.502846    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.502915    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:19.506400    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:19.540483    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.540483    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:19.544695    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:19.576470    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.576501    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:19.579834    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:19.609587    6576 logs.go:282] 0 containers: []
	W1205 08:07:19.609587    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:19.609587    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:19.609587    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:19.653000    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:19.653000    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:19.747787    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:19.739799   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.741016   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.742113   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.743293   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:19.744451   10707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:19.747787    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:19.747787    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:19.774804    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:19.774804    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:19.825222    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:19.825338    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.394074    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:22.419163    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:22.454202    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.454202    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:22.457716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:22.487462    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.487615    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:22.491427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:22.522398    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.522398    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:22.526148    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:22.554536    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.554536    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:22.558447    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:22.590329    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.590401    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:22.595088    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:22.626553    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.626553    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:22.630372    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:22.658911    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.658911    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:22.662715    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:22.692369    6576 logs.go:282] 0 containers: []
	W1205 08:07:22.692444    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:22.692468    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:22.692468    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:22.759391    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:22.759391    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:22.801415    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:22.801415    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:22.891643    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:22.881338   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.883456   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.887030   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.888265   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:22.889355   10868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
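
This repeated stderr is kubectl failing to reach the API server: nothing is listening on localhost:8443 inside the node because, as the container polls above show, no kube-apiserver container has ever been created. A quick way to confirm that state by hand is to probe the apiserver health endpoint directly. The sketch below is illustrative only; the 8443 port comes from the log, and /livez is the standard kube-apiserver health endpoint, not something this test exercises:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a self-signed cert during bring-up,
                // so skip verification for this one-off health probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8443/livez")
        if err != nil {
            // With no apiserver container running, this prints the same
            // "connect: connection refused" seen in the log above.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver /livez:", resp.Status)
    }
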
	I1205 08:07:22.891710    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:22.891738    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:22.922662    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:22.922662    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
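
One full iteration of minikube's apiserver wait loop ends here. Each pass does the same two things: it polls docker for every expected control-plane container by name (the docker ps -a --filter=name=k8s_<component> --format={{.ID}} calls), and when all of them come back empty it re-gathers diagnostics. A rough sketch of the polling half, hypothetical and simplified rather than minikube's actual logs.go code, with the component list and roughly 3-second cadence read off the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // components mirrors the k8s_<name> container names queried in the log.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    // containerIDs runs the same docker query the log shows via ssh_runner.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for attempt := 1; attempt <= 10; attempt++ {
            for _, c := range components {
                if ids, err := containerIDs(c); err != nil || len(ids) == 0 {
                    fmt.Printf("attempt %d: no container matching %q\n", attempt, c)
                }
            }
            time.Sleep(3 * time.Second) // cadence inferred from the timestamps
        }
    }

In the run above this check never succeeds: every pass through the component list hits the zero-containers case.
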
	I1205 08:07:25.480645    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:25.506403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:25.536534    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.536600    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:25.540233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:25.568373    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.568373    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:25.572581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:25.604196    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.604196    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:25.608476    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:25.639923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.640007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:25.643813    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:25.673923    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.673923    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:25.677542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:25.709156    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.709156    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:25.712910    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:25.744371    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.744371    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:25.750463    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:25.778113    6576 logs.go:282] 0 containers: []
	W1205 08:07:25.778113    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:25.778113    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:25.778113    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:25.842953    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:25.842953    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:25.881310    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:25.881310    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:25.976920    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:25.964944   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.966342   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.968369   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.969905   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:25.970655   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:25.976920    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:25.976920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:26.005828    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:26.005889    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
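
The other half of each iteration is the diagnostics pass: four sources are collected over SSH (the kubelet journal, dmesg, kubectl describe nodes, and the docker/cri-docker journals plus a crictl-or-docker ps fallback). A stand-alone sketch of that gather step, with the command strings copied verbatim from the log; it is illustrative only and assumes a Linux host with bash, sudo, journalctl, and docker available:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherCmds holds the diagnostic commands the log shows being run on
    // each "Gathering logs for ..." step (describe-nodes is omitted since
    // it needs the minikube-staged kubectl binary and kubeconfig).
    var gatherCmds = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range gatherCmds {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("=== %s ===\n%s\n", name, out)
        }
    }
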
	[Iterations at 08:07:28, 08:07:31, 08:07:34, 08:07:37, 08:07:40, 08:07:44, and 08:07:47 omitted: each repeats the cycle above except for timestamps, kubectl process IDs, and the order of the log-gathering steps. Every component query returns 0 containers, and every "kubectl describe nodes" attempt fails with the same connection-refused error on localhost:8443.]
	I1205 08:07:50.157279    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:50.184328    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:50.218852    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.218852    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:50.222438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:50.250551    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.250571    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:50.254169    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:50.285371    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.285424    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:50.289741    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:50.320093    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.320093    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:50.323845    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:50.357038    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.357084    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:50.360291    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:50.389753    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.389829    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:50.392859    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:50.423710    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.423710    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:50.427343    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:50.454456    6576 logs.go:282] 0 containers: []
	W1205 08:07:50.454456    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:50.454456    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:50.454456    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:50.516581    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:50.516581    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:50.555412    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:50.555412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:50.648402    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:50.638282   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.639233   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.641786   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.642733   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:50.645724   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:07:50.648402    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:50.648402    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:50.673701    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:50.673701    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:53.230542    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:53.256707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:53.290781    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.290781    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:53.294254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:53.326261    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.326261    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:53.329838    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:53.359630    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.359630    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:53.364896    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:53.396046    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.396046    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:53.400120    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:53.428713    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.428713    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:53.432409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:53.462479    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.462479    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:53.467583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:53.495306    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.495306    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:53.499565    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:53.530622    6576 logs.go:282] 0 containers: []
	W1205 08:07:53.530622    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:53.530622    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:53.530622    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:53.593183    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:53.593183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:53.633807    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:53.633807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:53.721016    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:53.712922   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.714157   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.715494   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.716874   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.718161   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:53.712922   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.714157   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.715494   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.716874   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:53.718161   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:53.721016    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:53.721016    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:53.748333    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:53.748442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
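The probe that opens each gathering pass (the docker ps -a --filter=name=k8s_... --format={{.ID}} runs above) asks Docker for containers named after each control-plane component and treats empty output as "No container was found". A rough harness showing the same sequence of commands is sketched below; the component names and the docker arguments are copied from the log, while the harness itself is hypothetical and not minikube's implementation.

    // container_probe.go — an illustrative sketch of the per-component probe
    // visible in the log: list containers named k8s_<component> and report
    // when none exist. Only the command and names are taken from the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println("docker ps failed:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			// Mirrors the log's warning for every component here.
    			fmt.Printf("no container found matching %q\n", c)
    		} else {
    			fmt.Printf("%s: %v\n", c, ids)
    		}
    	}
    }

In the state recorded above, every component yields zero IDs, which is why the subsequent describe nodes call has no apiserver to talk to.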
	I1205 08:07:56.315862    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:56.341452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:56.374032    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.374063    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:56.377843    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:56.408635    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.408698    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:56.412330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:56.442083    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.442083    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:56.445380    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:56.473679    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.473749    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:56.477263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:56.506107    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.506156    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:56.510975    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:56.538958    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.539022    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:56.542581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:56.572303    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.572303    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:56.576375    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:56.604073    6576 logs.go:282] 0 containers: []
	W1205 08:07:56.604073    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:56.604073    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:56.604145    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:56.641552    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:56.641552    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:56.734944    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:56.721878   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.722727   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.725718   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.727423   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.728368   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:56.721878   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.722727   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.725718   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.727423   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:56.728368   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:56.735002    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:56.735046    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:56.770367    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:56.770412    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:07:56.826378    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:56.826378    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.393300    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:07:59.417617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:07:59.452220    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.452220    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:07:59.456092    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:07:59.484787    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.484787    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:07:59.488348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:07:59.516670    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.516670    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:07:59.521214    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:07:59.548048    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.548048    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:07:59.551862    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:07:59.576869    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.576869    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:07:59.581825    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:07:59.610579    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.610579    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:07:59.614523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:07:59.642507    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.642507    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:07:59.646397    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:07:59.675062    6576 logs.go:282] 0 containers: []
	W1205 08:07:59.675062    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:07:59.675062    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:07:59.675062    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:07:59.739704    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:07:59.739704    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:07:59.782363    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:07:59.782363    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:07:59.876076    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:07:59.865923   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.867089   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.868088   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.870067   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.871213   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:07:59.865923   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.867089   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.868088   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.870067   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:07:59.871213   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:07:59.876076    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:07:59.876076    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:07:59.903005    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:07:59.903005    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:02.456978    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:02.483895    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:02.516374    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.516374    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:02.520443    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:02.553066    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.553148    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:02.556844    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:02.585220    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.585220    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:02.589183    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:02.620655    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.620655    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:02.625389    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:02.659292    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.659369    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:02.662727    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:02.690972    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.690972    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:02.694944    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:02.723751    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.723797    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:02.727357    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:02.764750    6576 logs.go:282] 0 containers: []
	W1205 08:08:02.764750    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:02.764750    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:02.764750    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:02.834733    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:02.834733    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:02.873432    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:02.873432    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:02.963503    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:02.952119   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.955623   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.956877   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.957681   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.960011   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:02.952119   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.955623   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.956877   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.957681   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:02.960011   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:02.963503    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:02.963503    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:02.992067    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:02.992067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:05.547340    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:05.572946    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:05.605473    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.605473    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:05.609479    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:05.639072    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.639072    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:05.642702    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:05.674126    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.674174    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:05.678318    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:05.710378    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.710378    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:05.713988    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:05.743263    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.743263    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:05.748802    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:05.777467    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.777467    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:05.781993    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:05.816147    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.816147    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:05.820044    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:05.849173    6576 logs.go:282] 0 containers: []
	W1205 08:08:05.849173    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:05.849173    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:05.849173    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:05.937771    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:05.926656   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.928398   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.929479   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.932790   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.933608   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:05.926656   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.928398   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.929479   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.932790   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:05.933608   13189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:05.937771    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:05.937771    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:05.965110    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:05.965110    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:06.012927    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:06.012927    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:06.076287    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:06.076287    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:08.621402    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:08.647297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:08.678598    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.678679    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:08.681866    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:08.710779    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.710856    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:08.714554    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:08.745379    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.745379    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:08.750135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:08.785796    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.785840    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:08.791900    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:08.823728    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.823778    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:08.827659    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:08.858652    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.858726    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:08.862304    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:08.893238    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.893287    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:08.896783    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:08.927578    6576 logs.go:282] 0 containers: []
	W1205 08:08:08.927578    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:08.927578    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:08.927578    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:08.990752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:08.990752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:09.030509    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:09.030509    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:09.116112    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:09.107888   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.108910   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110059   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.110999   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:09.111946   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:09.116629    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:09.116629    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:09.148307    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:09.148307    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:11.720341    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:11.750190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:11.784223    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.784247    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:11.789837    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:11.819184    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.819184    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:11.824438    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:11.852058    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.852058    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:11.857984    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:11.888391    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.888391    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:11.891707    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:11.921973    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.921973    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:11.925426    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:11.953845    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.953845    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:11.957863    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:11.987150    6576 logs.go:282] 0 containers: []
	W1205 08:08:11.987236    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:11.990921    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:12.018843    6576 logs.go:282] 0 containers: []
	W1205 08:08:12.018895    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:12.018895    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:12.018918    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:12.048523    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:12.048523    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:12.099490    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:12.099490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:12.163368    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:12.163368    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:12.204867    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:12.204867    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:12.290894    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:12.282216   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.283800   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.284871   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.285647   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:12.287650   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:14.795945    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:14.821749    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:14.851399    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.851399    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:14.855010    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:14.887370    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.887370    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:14.891117    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:14.922139    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.922139    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:14.926245    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:14.954095    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.954095    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:14.959551    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:14.987564    6576 logs.go:282] 0 containers: []
	W1205 08:08:14.987564    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:14.991080    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:15.023941    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.023941    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:15.027344    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:15.056411    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.056474    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:15.059417    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:15.092400    6576 logs.go:282] 0 containers: []
	W1205 08:08:15.092400    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:15.092400    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:15.092400    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:15.119932    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:15.119932    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:15.169067    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:15.169067    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:15.232603    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:15.232603    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:15.276106    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:15.276106    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:15.363421    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:15.350798   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.356353   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.357901   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.358812   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:15.361180   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
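The timestamps show the whole probe-and-gather cycle recurring roughly every three seconds, which is the shape of a poll-until-deadline loop. A generic sketch of that pattern follows; the 3-second interval is inferred from the log spacing, and the 2-minute deadline is an arbitrary placeholder, since minikube's actual wait logic is not reproduced here.

    // retry_probe.go — a sketch of the retry cadence suggested by the
    // timestamps: re-run a check every ~3 seconds until it succeeds or a
    // deadline passes. Interval and deadline are assumptions, not minikube's.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("apiserver is up")
    			return
    		}
    		fmt.Println("still refused, retrying in 3s:", err)
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("gave up: apiserver never came up before the deadline")
    }

In this run the condition never clears, so the loop recorded below keeps cycling until the test's own timeout expires.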
	I1205 08:08:17.870108    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:17.895889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:17.927528    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.927528    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:17.931166    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:17.959105    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.959105    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:17.962846    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:17.994011    6576 logs.go:282] 0 containers: []
	W1205 08:08:17.994011    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:17.998047    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:18.026606    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.026677    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:18.030234    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:18.061389    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.061389    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:18.065290    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:18.096454    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.096454    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:18.100320    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:18.129213    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.129213    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:18.133040    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:18.160088    6576 logs.go:282] 0 containers: []
	W1205 08:08:18.160111    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:18.160111    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:18.160111    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:18.221228    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:18.221228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:18.258886    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:18.258886    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:18.348416    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:08:18.339981   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.341081   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.342329   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.343581   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:18.344791   13851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:08:18.348496    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:18.348525    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:18.379855    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:18.379855    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:20.936239    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:20.959002    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:20.990013    6576 logs.go:282] 0 containers: []
	W1205 08:08:20.990085    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:20.993773    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:21.021884    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.021925    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:21.025964    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:21.054531    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.054531    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:21.058277    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:21.088997    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.089078    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:21.092631    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:21.121326    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.121360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:21.125135    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:21.160429    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.160496    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:21.164226    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:21.192488    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.192557    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:21.196294    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:21.228406    6576 logs.go:282] 0 containers: []
	W1205 08:08:21.228445    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:21.228445    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:21.228495    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:21.291604    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:21.292600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:21.331218    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:21.331218    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:21.412454    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:21.404285   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.405161   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.406580   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.407992   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:21.410585   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:21.412454    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:21.412454    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:21.441164    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:21.441229    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:23.994395    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:24.020275    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:24.054682    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.054682    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:24.058674    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:24.089654    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.089654    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:24.093569    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:24.123224    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.123224    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:24.127942    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:24.155350    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.155350    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:24.159192    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:24.192652    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.192652    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:24.197194    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:24.229851    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.229851    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:24.233957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:24.262158    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.262158    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:24.266478    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:24.297683    6576 logs.go:282] 0 containers: []
	W1205 08:08:24.297766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:24.297766    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:24.297766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:24.388464    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:24.379634   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.380768   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.381987   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.384259   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:24.385347   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:24.388464    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:24.388464    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:24.416764    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:24.416764    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:24.468678    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:24.469203    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:24.532678    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:24.532678    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.075175    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:27.104797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:27.137440    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.137440    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:27.141581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:27.171103    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.171126    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:27.174625    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:27.205068    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.205102    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:27.208711    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:27.237765    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.237806    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:27.241719    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:27.269838    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.269838    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:27.273353    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:27.300835    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.300835    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:27.304633    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:27.333062    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.333062    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:27.338523    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:27.366572    6576 logs.go:282] 0 containers: []
	W1205 08:08:27.366572    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:27.366572    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:27.366572    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:27.402514    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:27.402514    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:27.499452    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:27.485333   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.486352   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.489518   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.491069   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:27.492814   14330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:27.499452    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:27.499452    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:27.528089    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:27.528089    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:27.596881    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:27.596881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.168154    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:30.194986    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:30.228709    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.228709    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:30.233961    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:30.268256    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.268256    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:30.271667    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:30.300456    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.300519    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:30.303870    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:30.335955    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.335955    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:30.339590    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:30.367829    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.367829    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:30.373123    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:30.401294    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.401327    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:30.404974    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:30.436526    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.436526    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:30.440246    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:30.478544    6576 logs.go:282] 0 containers: []
	W1205 08:08:30.478599    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:30.478599    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:30.478651    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:30.544716    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:30.544716    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:30.584496    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:30.584496    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:30.671308    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:30.658597   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.660972   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.662159   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.663815   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:30.665286   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:30.671352    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:30.671352    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:30.699029    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:30.699029    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:33.251744    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:33.280500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:33.311912    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.311912    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:33.316407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:33.347966    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.347966    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:33.351341    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:33.386249    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.386249    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:33.389828    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:33.420571    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.420571    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:33.423584    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:33.450599    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.450599    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:33.453949    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:33.488480    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.488480    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:33.492797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:33.523382    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.523382    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:33.526929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:33.561860    6576 logs.go:282] 0 containers: []
	W1205 08:08:33.561860    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:33.561860    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:33.561860    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:33.628425    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:33.628425    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:33.666453    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:33.666453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:33.756872    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:33.744743   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.746140   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.747219   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.749788   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:33.751052   14672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:33.756872    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:33.756872    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:33.785780    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:33.785780    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:36.342322    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:36.368238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:36.399529    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.399529    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:36.402710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:36.430561    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.430561    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:36.434233    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:36.461894    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.461894    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:36.466270    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:36.492354    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.492354    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:36.495668    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:36.526818    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.526818    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:36.530606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:36.564752    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.564752    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:36.569130    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:36.598403    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.598403    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:36.603579    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:36.635757    6576 logs.go:282] 0 containers: []
	W1205 08:08:36.635757    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:36.635757    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:36.635757    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:36.702715    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:36.702715    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:36.740740    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:36.740740    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:36.827779    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:36.815168   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.816087   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.818808   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.820365   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:36.823209   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:36.827779    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:36.827779    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:36.855113    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:36.855148    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.404078    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:39.428626    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:39.461540    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.461540    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:39.465369    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:39.497259    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.497368    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:39.501168    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:39.532526    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.532526    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:39.537388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:39.570114    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.570114    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:39.574332    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:39.607392    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.607392    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:39.611100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:39.640933    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.640933    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:39.644381    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:39.673224    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.673224    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:39.678235    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:39.706766    6576 logs.go:282] 0 containers: []
	W1205 08:08:39.706766    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:39.706766    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:39.706766    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:39.734527    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:39.734527    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:39.787138    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:39.787138    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:39.849637    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:39.849637    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:39.889331    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:39.889331    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:39.977390    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:39.965131   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.966056   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.969346   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.971002   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:39.972426   15008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:42.481792    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:42.508550    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:42.541632    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.541632    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:42.545635    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:42.595829    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.595829    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:42.601196    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:42.630888    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.630888    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:42.634929    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:42.665451    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.665451    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:42.668581    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:42.701244    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.701244    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:42.705368    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:42.737250    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.737250    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:42.740441    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:42.766622    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.766700    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:42.770278    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:42.801486    6576 logs.go:282] 0 containers: []
	W1205 08:08:42.801486    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:42.801486    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:42.801486    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:42.866794    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:42.866930    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:42.906819    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:42.906819    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:43.000226    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:42.986999   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.987824   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.992535   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.993702   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:42.994447   15157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:43.000226    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:43.000226    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:43.027011    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:43.027011    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:45.586794    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:45.615024    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:45.642666    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.642666    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:45.646348    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:45.675867    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.675867    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:45.679650    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:45.711785    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.711785    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:45.717449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:45.750065    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.750109    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:45.753406    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:45.782908    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.782908    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:45.786362    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:45.816309    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.816309    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:45.819889    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:45.847629    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.847656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:45.850622    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:45.880676    6576 logs.go:282] 0 containers: []
	W1205 08:08:45.880733    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:45.880759    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:45.880759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:45.943843    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:45.943843    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:45.984212    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:45.984212    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:46.071821    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:46.060605   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.061646   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.062901   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.064463   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:46.065460   15318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:08:46.071821    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:46.071821    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:46.098280    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:46.098280    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:08:48.651285    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:08:48.676952    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:08:48.706696    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.706696    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:08:48.710427    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:08:48.738766    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.738766    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:08:48.746145    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:08:48.773486    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.773486    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:08:48.778542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:08:48.805908    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.805908    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:08:48.809817    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:08:48.840360    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.840360    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:08:48.843723    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:08:48.871560    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.871560    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:08:48.875316    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:08:48.903556    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.903556    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:08:48.908924    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:08:48.938455    6576 logs.go:282] 0 containers: []
	W1205 08:08:48.938455    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:08:48.938455    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:08:48.938455    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:08:49.001951    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:08:49.001951    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:08:49.042098    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:08:49.042098    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:08:49.131350    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:08:49.120438   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.121754   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.123116   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.124524   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:08:49.125836   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... same five "connection refused" errors as above ...]
	** /stderr **
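
[editor's note] Every one of these failures is the same symptom: nothing is listening on the apiserver's port, so kubectl's discovery requests to https://localhost:8443 are refused at the TCP level before any TLS or HTTP exchange happens. A quick way to confirm that reading, independent of kubectl and its kubeconfig, is a raw dial against the port (host and port taken from the log; this is a diagnostic sketch, not part of minikube):

// dialcheck.go - probe the apiserver port the same way the refused
// connections above fail: a plain TCP dial, no TLS or auth involved.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the kubectl errors in the log:
		// no kube-apiserver process has bound the port yet.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
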
	I1205 08:08:49.131350    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:08:49.131350    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:08:49.166759    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:08:49.166759    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
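
[editor's note] The container-status command is a small shell idiom worth unpacking: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a tries crictl first (falling back to the bare name if which finds nothing) and, if that whole command fails, falls back to docker ps -a. The same try-one-then-the-other logic, slightly simplified, as a Go sketch:

// containerstatus.go - sketch of the crictl-with-docker-fallback idiom used
// by the "container status" gathering step above (simplified: no `which`).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl, which works for any CRI runtime; fall back to the Docker CLI.
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		fmt.Print(string(out))
		return
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("neither crictl nor docker produced container status:", err)
		return
	}
	fmt.Print(string(out))
}
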
	[... this log-gathering cycle repeats nearly verbatim roughly every 3 seconds, at 08:08:51, 08:08:54, 08:08:57, 08:09:00, 08:09:04, 08:09:07, 08:09:10, 08:09:13 and 08:09:16: each cycle runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, the eight `docker ps -a --filter=name=k8s_*` checks (every one returning 0 containers), and the kubelet/dmesg/describe-nodes/Docker/container-status gathering, with `kubectl describe nodes` failing each time with the same five "connection refused" errors against localhost:8443 ...]
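
[editor's note] Taken together, the collapsed block above is one polling loop: roughly every three seconds minikube re-checks for a kube-apiserver process, re-enumerates the k8s_ containers, and re-gathers diagnostics, until the apiserver appears or the surrounding start operation gives up. A condensed sketch of that wait loop (the interval and timeout are read off the log's cadence, not copied from minikube's source):

// waitapiserver.go - condensed sketch of the ~3s polling loop visible in the
// log: keep probing for a running kube-apiserver until a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process for the minikube
// profile exists, using the same pgrep pattern as the log lines above.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 0 only when a process matched
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// In minikube, each pause is also where the kubelet/dmesg/describe-nodes/
		// Docker/container-status logs get gathered for the eventual error report.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
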
	I1205 08:09:19.502757    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:19.529429    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:19.557499    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.557499    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:19.561490    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:19.590127    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.590127    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:19.594042    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:19.622382    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.622382    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:19.626026    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:19.653513    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.653513    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:19.656672    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:19.686153    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.686153    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:19.691297    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:19.720831    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.720858    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:19.724786    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:19.751107    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.751107    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:19.754979    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:19.782999    6576 logs.go:282] 0 containers: []
	W1205 08:09:19.782999    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:19.782999    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:19.782999    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:19.844801    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:19.844801    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:19.884439    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:19.884439    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:19.977224    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:19.964996   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.968924   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.970786   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.973180   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:19.975233   17123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:19.977224    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:19.977224    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:20.007404    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:20.007404    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:22.569427    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:22.596121    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:22.628181    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.628181    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:22.632086    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:22.660848    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.660848    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:22.664755    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:22.694182    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.694261    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:22.698085    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:22.726532    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.726600    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:22.730354    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:22.757319    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.757355    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:22.760937    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:22.792791    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.792791    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:22.799388    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:22.841372    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.841372    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:22.845285    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:22.879377    6576 logs.go:282] 0 containers: []
	W1205 08:09:22.879377    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:22.879377    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:22.879377    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:22.946156    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:22.946156    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:22.990461    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:22.990461    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:23.119453    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:23.109436   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.110223   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.112884   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.115261   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:23.117081   17298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:23.119453    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:23.119453    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:23.146199    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:23.147241    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:25.703191    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:25.728570    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:25.758884    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.758884    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:25.765071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:25.792957    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.792957    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:25.796556    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:25.825466    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.825466    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:25.828728    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:25.857451    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.857521    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:25.861306    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:25.887700    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.887700    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:25.891071    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:25.920875    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.920875    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:25.924452    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:25.952908    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.952952    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:25.956305    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:25.987608    6576 logs.go:282] 0 containers: []
	W1205 08:09:25.987608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:25.987608    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:25.987608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:26.027162    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:26.027162    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:26.120245    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:26.107417   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.108200   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.112823   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.113923   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:26.114975   17463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:26.120245    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:26.120245    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:26.147670    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:26.147697    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:26.198923    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:26.198963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:28.769076    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:28.797716    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:28.829859    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.829898    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:28.833257    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:28.864507    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.864507    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:28.868407    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:28.898827    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.898827    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:28.902971    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:28.933087    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.933087    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:28.937063    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:28.964140    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.964140    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:28.968403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:28.997620    6576 logs.go:282] 0 containers: []
	W1205 08:09:28.997620    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:29.001779    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:29.035745    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.035745    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:29.038757    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:29.068429    6576 logs.go:282] 0 containers: []
	W1205 08:09:29.068429    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:29.068429    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:29.068429    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:29.124688    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:29.124688    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:29.188675    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:29.188675    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:29.227887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:29.227887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:29.312828    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:29.301515   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.302784   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.303557   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.306066   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:29.307186   17647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:29.312828    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:29.312828    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:31.845911    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:31.878797    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:31.916523    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.916523    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:31.919583    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:31.950914    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.950976    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:31.954687    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:31.983555    6576 logs.go:282] 0 containers: []
	W1205 08:09:31.983580    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:31.987603    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:32.021007    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.021007    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:32.025190    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:32.056980    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.057033    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:32.060500    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:32.104780    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.104780    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:32.108815    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:32.135429    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.135494    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:32.138969    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:32.171260    6576 logs.go:282] 0 containers: []
	W1205 08:09:32.171260    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:32.171260    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:32.171260    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:32.237752    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:32.237752    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:32.277887    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:32.277887    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:32.365810    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:32.355223   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.356563   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.358244   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.359525   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:32.360794   17796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:32.365810    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:32.365810    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:32.392252    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:32.392252    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:34.943627    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:34.969529    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:35.010672    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.010672    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:35.015462    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:35.048036    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.048036    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:35.055991    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:35.103005    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.103005    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:35.106890    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:35.137906    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.137906    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:35.141530    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:35.172625    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.172625    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:35.176175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:35.209474    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.209474    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:35.213175    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:35.244787    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.244787    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:35.248557    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:35.275127    6576 logs.go:282] 0 containers: []
	W1205 08:09:35.275158    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:35.275158    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:35.275158    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:35.334298    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:35.334298    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:35.373969    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:35.373969    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:35.459656    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:35.448655   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.449567   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.451473   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.452624   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:35.453549   17956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:35.459755    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:35.459755    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:35.489057    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:35.489057    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:38.049404    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:38.073507    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:38.101267    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.101337    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:38.104951    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:38.134276    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.134276    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:38.139127    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:38.166437    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.166437    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:38.170518    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:38.199145    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.199145    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:38.202760    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:38.230466    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.230466    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:38.233640    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:38.263867    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.263867    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:38.267542    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:38.297791    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.297791    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:38.301874    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:38.332980    6576 logs.go:282] 0 containers: []
	W1205 08:09:38.332980    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:38.332980    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:38.332980    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:38.396086    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:38.396086    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:38.433018    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:38.433018    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:38.516847    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:38.505052   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.505960   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.507542   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.510778   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:38.512682   18114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:38.516847    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:38.516847    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:38.545985    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:38.545985    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:41.097758    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:41.125607    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:41.156423    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.156423    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:41.159823    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:41.188324    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.188383    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:41.192299    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:41.224751    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.224789    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:41.228655    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:41.257790    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.257790    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:41.261606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:41.292935    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.292999    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:41.296487    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:41.322728    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.322728    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:41.326980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:41.355569    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.355569    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:41.359412    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:41.388228    6576 logs.go:282] 0 containers: []
	W1205 08:09:41.388228    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:41.388228    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:41.388228    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:41.454094    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:41.454094    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:41.492536    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:41.492536    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:41.584848    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:41.573928   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.575115   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.576782   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.579176   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:41.580576   18274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:41.584892    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:41.584892    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:41.611807    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:41.611807    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:44.169483    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:44.196254    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:44.224412    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.224412    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:44.229628    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:44.257724    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.257724    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:44.262355    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:44.289872    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.289926    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:44.293506    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:44.321891    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.321891    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:44.325045    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:44.354424    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.354424    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:44.357980    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:44.388960    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.388960    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:44.392224    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:44.424484    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.424484    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:44.427710    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:44.458834    6576 logs.go:282] 0 containers: []
	W1205 08:09:44.458834    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:44.458834    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:44.458834    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:44.523336    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:44.523336    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:44.560362    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:44.560362    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:44.656711    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:44.646635   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.647917   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.648725   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.650985   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:44.652340   18432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 08:09:44.656711    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:44.656711    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:44.682009    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:44.683010    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.243380    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:47.270606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:47.302678    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.302720    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:47.305835    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:47.334169    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.334213    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:47.338162    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:47.370622    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.370693    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:47.374238    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:47.406764    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.406787    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:47.410449    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:47.439290    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.439332    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:47.442816    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:47.475239    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.475239    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:47.479100    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:47.510196    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.510196    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:47.513831    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:47.543315    6576 logs.go:282] 0 containers: []
	W1205 08:09:47.543378    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:47.543378    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:47.543411    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:47.577600    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:47.577600    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:47.651517    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:47.651517    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:47.717530    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:47.717530    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:47.757989    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:47.757989    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:47.848615    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:47.839056   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.840986   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.842403   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.843197   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:47.845464   18616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:50.354473    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:50.381662    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:50.410303    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.410303    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:50.416210    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:50.443479    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.443479    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:50.447606    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:50.475214    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.475214    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:50.479409    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:50.508984    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.508984    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:50.513185    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:50.544532    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.544532    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:50.548200    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:50.578350    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.578350    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:50.583137    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:50.615656    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.615656    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:50.619983    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:50.649117    6576 logs.go:282] 0 containers: []
	W1205 08:09:50.649117    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:50.649117    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:50.649117    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:50.678837    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:50.678837    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:50.730963    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:50.730963    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:50.797442    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:50.797442    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:50.839051    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:50.840050    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:50.934073    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:50.923616   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.924540   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.926912   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.928301   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:50.929210   18783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.440116    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:53.465957    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:53.497390    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.497462    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:53.501077    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:53.529488    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.529488    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:53.536331    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:53.563367    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.563367    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:53.566361    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:53.596894    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.596894    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:53.600611    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:53.630623    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.630623    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:53.634434    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:53.664123    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.664123    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:53.668403    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:53.697948    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.697948    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:53.701419    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:53.730378    6576 logs.go:282] 0 containers: []
	W1205 08:09:53.730462    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:53.730462    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:53.730462    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:53.798465    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:53.798465    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:53.841124    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:53.841124    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:53.935344    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:53.926933   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.927894   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.929369   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.931036   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:53.933003   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:53.936318    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:53.936318    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:53.965040    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:53.965040    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:56.520907    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:56.551718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:56.584506    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.584506    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:56.588065    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:56.618214    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.618214    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:56.622199    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:56.650798    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.650798    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:56.654367    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:56.685409    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.685440    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:56.688781    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:56.719049    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.719163    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:56.722810    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:56.753646    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.753646    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:56.757666    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:56.793942    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.793942    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:56.798049    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:56.827315    6576 logs.go:282] 0 containers: []
	W1205 08:09:56.827315    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:56.827315    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:56.827315    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:56.893213    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:56.893213    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:09:56.931234    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:09:56.931234    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:09:57.020142    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:09:57.009228   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.010188   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.011440   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.012840   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:09:57.014657   19099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:09:57.020142    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:09:57.020142    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:09:57.048871    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:09:57.048871    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:09:59.606004    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:09:59.632524    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:09:59.662177    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.662177    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:09:59.666311    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:09:59.701152    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.701202    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:09:59.704398    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:09:59.733278    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.733278    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:09:59.738174    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:09:59.769038    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.769038    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:09:59.773266    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:09:59.814259    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.814259    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:09:59.818330    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:09:59.848066    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.848066    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:09:59.851684    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:09:59.880029    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.880029    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:09:59.884457    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:09:59.914608    6576 logs.go:282] 0 containers: []
	W1205 08:09:59.914608    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:09:59.914608    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:09:59.914608    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:09:59.978490    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:09:59.978490    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:00.018881    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:00.018881    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:00.109744    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:00.098063   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.099309   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.100170   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.102815   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:00.103661   19264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:00.109744    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:00.109744    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:00.137522    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:00.137591    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:02.693722    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:02.718495    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 08:10:02.754864    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.754864    6576 logs.go:284] No container was found matching "kube-apiserver"
	I1205 08:10:02.758547    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 08:10:02.795133    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.795231    6576 logs.go:284] No container was found matching "etcd"
	I1205 08:10:02.798914    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 08:10:02.828115    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.828115    6576 logs.go:284] No container was found matching "coredns"
	I1205 08:10:02.831263    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 08:10:02.864241    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.864241    6576 logs.go:284] No container was found matching "kube-scheduler"
	I1205 08:10:02.867861    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 08:10:02.895555    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.895555    6576 logs.go:284] No container was found matching "kube-proxy"
	I1205 08:10:02.901617    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 08:10:02.931756    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.931756    6576 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 08:10:02.935718    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 08:10:02.964034    6576 logs.go:282] 0 containers: []
	W1205 08:10:02.964034    6576 logs.go:284] No container was found matching "kindnet"
	I1205 08:10:02.968113    6576 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1205 08:10:03.000080    6576 logs.go:282] 0 containers: []
	W1205 08:10:03.000080    6576 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 08:10:03.000080    6576 logs.go:123] Gathering logs for describe nodes ...
	I1205 08:10:03.000080    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 08:10:03.092694    6576 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 08:10:03.082063   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.083203   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.085163   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.086889   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:10:03.089046   19423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 08:10:03.094183    6576 logs.go:123] Gathering logs for Docker ...
	I1205 08:10:03.094183    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 08:10:03.124625    6576 logs.go:123] Gathering logs for container status ...
	I1205 08:10:03.124625    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 08:10:03.178920    6576 logs.go:123] Gathering logs for kubelet ...
	I1205 08:10:03.178920    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 08:10:03.237776    6576 logs.go:123] Gathering logs for dmesg ...
	I1205 08:10:03.237776    6576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 08:10:05.783793    6576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:10:05.810874    6576 out.go:203] 
	W1205 08:10:05.812874    6576 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 08:10:05.812874    6576 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 08:10:05.812874    6576 out.go:285] * Related issues:
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 08:10:05.812874    6576 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 08:10:05.815880    6576 out.go:203] 
	
	
	==> Docker <==
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.859890520Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.859986630Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860002932Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860012733Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860021234Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860055437Z" level=info msg="Docker daemon" commit=4612690 containerd-snapshotter=false storage-driver=overlay2 version=29.0.4
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.860095541Z" level=info msg="Initializing buildkit"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.987212646Z" level=info msg="Completed buildkit initialization"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.997928393Z" level=info msg="Daemon has completed initialization"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998072309Z" level=info msg="API listen on /run/docker.sock"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998148017Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 05 07:58:14 no-preload-104100 dockerd[925]: time="2025-12-05T07:58:14.998246927Z" level=info msg="API listen on [::]:2376"
	Dec 05 07:58:14 no-preload-104100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 05 07:58:15 no-preload-104100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Loaded network plugin cni"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 05 07:58:15 no-preload-104100 cri-dockerd[1220]: time="2025-12-05T07:58:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 05 07:58:15 no-preload-104100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:17:14.740368   21503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:17:14.740845   21503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:17:14.742270   21503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:17:14.742484   21503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:17:14.749753   21503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.912373] CPU: 10 PID: 467231 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f59c4559b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f59c4559af6.
	[  +0.000001] RSP: 002b:00007fff7b401a80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.986945] CPU: 6 PID: 467375 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f68553b7b20
	[  +0.000010] Code: Unable to access opcode bytes at RIP 0x7f68553b7af6.
	[  +0.000001] RSP: 002b:00007ffe7761e510 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 08:17:14 up  3:50,  0 user,  load average: 0.26, 0.82, 2.23
	Linux no-preload-104100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:17:11 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:17:12 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1512.
	Dec 05 08:17:12 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:12 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:12 no-preload-104100 kubelet[21336]: E1205 08:17:12.500309   21336 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:17:12 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:17:12 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:17:13 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1513.
	Dec 05 08:17:13 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:13 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:13 no-preload-104100 kubelet[21365]: E1205 08:17:13.265507   21365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:17:13 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:17:13 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:17:13 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1514.
	Dec 05 08:17:13 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:13 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:14 no-preload-104100 kubelet[21378]: E1205 08:17:14.016373   21378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:17:14 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:17:14 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:17:14 no-preload-104100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1515.
	Dec 05 08:17:14 no-preload-104100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:14 no-preload-104100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:17:14 no-preload-104100 kubelet[21497]: E1205 08:17:14.767354   21497 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:17:14 no-preload-104100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:17:14 no-preload-104100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
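Note on the capture above: the kubelet section shows every restart (counters 1512-1515) exiting with "kubelet is configured to not run on a host using cgroup v1", and the Docker daemon log likewise warns that cgroup v1 support is deprecated. With kubelet never staying up, no static pods are created, so kube-apiserver never appears and every kubectl call is refused on localhost:8443. A minimal diagnostic sketch, assuming the no-preload-104100 profile still exists ("cgroup2fs" indicates cgroup v2, "tmpfs" indicates v1; the docker info query reports the backing daemon's cgroup version):

	# Inspect the cgroup hierarchy mounted inside the node
	minikube ssh -p no-preload-104100 -- stat -fc %T /sys/fs/cgroup/
	# Ask the Docker daemon that backs the kic node
	docker info --format '{{.CgroupVersion}}'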
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100: exit status 2 (592.1503ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-104100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (229.47s)
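The closing status probe can be reproduced by hand; minikube status deliberately exits non-zero while components are stopped, which is why helpers_test.go:262 treats exit status 2 as "may be ok". A sketch of the same query, taken from the log above (the --format value is a Go template over minikube's status struct):

	# Print only the apiserver field; "Stopped" plus a non-zero exit is expected here
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-104100 -n no-preload-104100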


Test pass (359/427)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.3
4 TestDownloadOnly/v1.28.0/preload-exists 0.04
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.25
9 TestDownloadOnly/v1.28.0/DeleteAll 1.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.88
12 TestDownloadOnly/v1.34.2/json-events 4.88
13 TestDownloadOnly/v1.34.2/preload-exists 0
16 TestDownloadOnly/v1.34.2/kubectl 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.54
18 TestDownloadOnly/v1.34.2/DeleteAll 0.73
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.86
21 TestDownloadOnly/v1.35.0-beta.0/json-events 6.96
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.19
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 1.04
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.69
29 TestDownloadOnlyKic 1.67
30 TestBinaryMirror 2.54
31 TestOffline 128.02
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 391.44
38 TestAddons/serial/Volcano 52.32
40 TestAddons/serial/GCPAuth/Namespaces 0.25
41 TestAddons/serial/GCPAuth/FakeCredentials 9.16
45 TestAddons/parallel/RegistryCreds 1.43
47 TestAddons/parallel/InspektorGadget 12.73
48 TestAddons/parallel/MetricsServer 7.22
50 TestAddons/parallel/CSI 50.56
51 TestAddons/parallel/Headlamp 49.23
52 TestAddons/parallel/CloudSpanner 7.49
53 TestAddons/parallel/LocalPath 86.23
54 TestAddons/parallel/NvidiaDevicePlugin 6.86
55 TestAddons/parallel/Yakd 12.39
56 TestAddons/parallel/AmdGpuDevicePlugin 6.86
57 TestAddons/StoppedEnableDisable 12.89
58 TestCertOptions 59.06
59 TestCertExpiration 278.15
60 TestDockerFlags 60.4
61 TestForceSystemdFlag 54.24
62 TestForceSystemdEnv 50
68 TestErrorSpam/start 2.59
69 TestErrorSpam/status 2
70 TestErrorSpam/pause 2.59
71 TestErrorSpam/unpause 2.61
72 TestErrorSpam/stop 18.96
75 TestFunctional/serial/CopySyncFile 0.04
76 TestFunctional/serial/StartWithProxy 79.54
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 54
79 TestFunctional/serial/KubeContext 0.09
80 TestFunctional/serial/KubectlGetPods 0.25
83 TestFunctional/serial/CacheCmd/cache/add_remote 9.68
84 TestFunctional/serial/CacheCmd/cache/add_local 4.25
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
86 TestFunctional/serial/CacheCmd/cache/list 0.19
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.6
88 TestFunctional/serial/CacheCmd/cache/cache_reload 4.49
89 TestFunctional/serial/CacheCmd/cache/delete 0.39
90 TestFunctional/serial/MinikubeKubectlCmd 0.48
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.33
92 TestFunctional/serial/ExtraConfig 61.61
93 TestFunctional/serial/ComponentHealth 0.14
94 TestFunctional/serial/LogsCmd 1.74
95 TestFunctional/serial/LogsFileCmd 1.82
96 TestFunctional/serial/InvalidService 5.87
98 TestFunctional/parallel/ConfigCmd 1.2
100 TestFunctional/parallel/DryRun 1.52
101 TestFunctional/parallel/InternationalLanguage 0.6
102 TestFunctional/parallel/StatusCmd 1.9
107 TestFunctional/parallel/AddonsCmd 0.42
108 TestFunctional/parallel/PersistentVolumeClaim 38.09
110 TestFunctional/parallel/SSHCmd 1.25
111 TestFunctional/parallel/CpCmd 3.49
112 TestFunctional/parallel/MySQL 56.1
113 TestFunctional/parallel/FileSync 0.57
114 TestFunctional/parallel/CertSync 4.05
118 TestFunctional/parallel/NodeLabels 0.2
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
122 TestFunctional/parallel/License 1.62
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.33
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.4
129 TestFunctional/parallel/ServiceCmd/List 0.82
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.86
131 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
138 TestFunctional/parallel/Version/short 0.22
139 TestFunctional/parallel/Version/components 1.9
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.5
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.56
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.46
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.48
144 TestFunctional/parallel/ImageCommands/ImageBuild 8.08
145 TestFunctional/parallel/ImageCommands/Setup 1.83
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.33
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.35
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.03
151 TestFunctional/parallel/DockerEnv/powershell 5.42
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.83
153 TestFunctional/parallel/ServiceCmd/Format 15.01
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.73
155 TestFunctional/parallel/ProfileCmd/profile_not_create 1
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.95
157 TestFunctional/parallel/ProfileCmd/profile_list 0.94
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.14
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.89
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
161 TestFunctional/parallel/ServiceCmd/URL 15.01
162 TestFunctional/delete_echo-server_images 0.15
163 TestFunctional/delete_my-image_image 0.06
164 TestFunctional/delete_minikube_cached_images 0.06
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.11
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 9.47
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 3.76
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.19
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.19
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.6
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 4.54
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.39
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.23
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.31
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 1.14
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 1.51
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.68
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.43
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 1.11
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 3.38
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.56
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 3.39
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 2.37
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.92
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.8
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.83
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.18
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 1.72
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.46
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.49
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.46
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.48
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 5.33
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.8
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 3.17
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 2.87
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 3.58
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.68
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.95
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.24
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.91
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.34
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.31
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.32
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.15
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.06
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.06
260 TestMultiControlPlane/serial/StartCluster 224.68
261 TestMultiControlPlane/serial/DeployApp 9.68
262 TestMultiControlPlane/serial/PingHostFromPods 2.49
263 TestMultiControlPlane/serial/AddWorkerNode 55.26
264 TestMultiControlPlane/serial/NodeLabels 0.14
265 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.02
266 TestMultiControlPlane/serial/CopyFile 34.11
267 TestMultiControlPlane/serial/StopSecondaryNode 13.43
268 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.59
269 TestMultiControlPlane/serial/RestartSecondaryNode 49.92
270 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2
271 TestMultiControlPlane/serial/RestartClusterKeepsNodes 203.46
272 TestMultiControlPlane/serial/DeleteSecondaryNode 14.36
273 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.49
274 TestMultiControlPlane/serial/StopCluster 35.91
275 TestMultiControlPlane/serial/RestartCluster 122.52
276 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.51
277 TestMultiControlPlane/serial/AddSecondaryNode 80.07
278 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2
281 TestImageBuild/serial/Setup 50.48
282 TestImageBuild/serial/NormalBuild 4.09
283 TestImageBuild/serial/BuildWithBuildArg 2.31
284 TestImageBuild/serial/BuildWithDockerIgnore 1.23
285 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.26
290 TestJSONOutput/start/Command 77.52
291 TestJSONOutput/start/Audit 0
293 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/pause/Command 1.16
297 TestJSONOutput/pause/Audit 0
299 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/unpause/Command 0.89
303 TestJSONOutput/unpause/Audit 0
305 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/stop/Command 12.11
309 TestJSONOutput/stop/Audit 0
311 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
313 TestErrorJSONOutput 0.68
315 TestKicCustomNetwork/create_custom_network 54.46
316 TestKicCustomNetwork/use_default_bridge_network 53.25
317 TestKicExistingNetwork 55.09
318 TestKicCustomSubnet 56.69
319 TestKicStaticIP 53.59
320 TestMainNoArgs 0.16
321 TestMinikubeProfile 100.98
324 TestMountStart/serial/StartWithMountFirst 13.63
325 TestMountStart/serial/VerifyMountFirst 0.56
326 TestMountStart/serial/StartWithMountSecond 13.87
327 TestMountStart/serial/VerifyMountSecond 0.54
328 TestMountStart/serial/DeleteFirst 2.4
329 TestMountStart/serial/VerifyMountPostDelete 0.56
330 TestMountStart/serial/Stop 1.87
331 TestMountStart/serial/RestartStopped 10.86
332 TestMountStart/serial/VerifyMountPostStop 0.56
335 TestMultiNode/serial/FreshStart2Nodes 130.49
336 TestMultiNode/serial/DeployApp2Nodes 7.57
337 TestMultiNode/serial/PingHostFrom2Pods 1.76
338 TestMultiNode/serial/AddNode 54.18
339 TestMultiNode/serial/MultiNodeLabels 0.14
340 TestMultiNode/serial/ProfileList 1.39
341 TestMultiNode/serial/CopyFile 19.42
342 TestMultiNode/serial/StopNode 3.83
343 TestMultiNode/serial/StartAfterStop 13.25
344 TestMultiNode/serial/RestartKeepsNodes 83.57
345 TestMultiNode/serial/DeleteNode 8.16
346 TestMultiNode/serial/StopMultiNode 24.1
347 TestMultiNode/serial/RestartMultiNode 62.86
348 TestMultiNode/serial/ValidateNameConflict 50.82
352 TestPreload 164.56
353 TestScheduledStopWindows 114.38
357 TestInsufficientStorage 28.55
358 TestRunningBinaryUpgrade 220.31
361 TestMissingContainerUpgrade 126.74
363 TestNoKubernetes/serial/StartNoK8sWithVersion 0.29
364 TestStoppedBinaryUpgrade/Setup 0.81
373 TestPause/serial/Start 127.49
374 TestNoKubernetes/serial/StartWithK8s 93.04
375 TestStoppedBinaryUpgrade/Upgrade 410.39
376 TestNoKubernetes/serial/StartWithStopK8s 21.21
377 TestNoKubernetes/serial/Start 15.86
378 TestPause/serial/SecondStartNoReconfiguration 60.05
379 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
380 TestNoKubernetes/serial/VerifyK8sNotRunning 0.74
381 TestNoKubernetes/serial/ProfileList 3.53
382 TestNoKubernetes/serial/Stop 6.7
383 TestNoKubernetes/serial/StartNoArgs 12.04
384 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.57
385 TestPause/serial/Pause 1.04
386 TestPause/serial/VerifyStatus 0.62
387 TestPause/serial/Unpause 0.88
388 TestPause/serial/PauseAgain 1.25
389 TestPause/serial/DeletePaused 3.75
390 TestPause/serial/VerifyDeletedResources 20.48
402 TestStoppedBinaryUpgrade/MinikubeLogs 1.58
404 TestStartStop/group/old-k8s-version/serial/FirstStart 95.42
407 TestStartStop/group/old-k8s-version/serial/DeployApp 9.67
408 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.57
409 TestStartStop/group/old-k8s-version/serial/Stop 12.09
410 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.54
411 TestStartStop/group/old-k8s-version/serial/SecondStart 33.57
412 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 23.01
414 TestStartStop/group/embed-certs/serial/FirstStart 94.7
415 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.99
416 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.53
417 TestStartStop/group/old-k8s-version/serial/Pause 5.87
419 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.43
420 TestStartStop/group/embed-certs/serial/DeployApp 9.52
421 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.52
422 TestStartStop/group/embed-certs/serial/Stop 12.49
423 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.64
424 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.63
425 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
426 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.55
427 TestStartStop/group/embed-certs/serial/SecondStart 49.45
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.54
429 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.71
430 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
431 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.33
432 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.47
433 TestStartStop/group/embed-certs/serial/Pause 5.21
434 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.31
435 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.33
438 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.49
439 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.66
440 TestNetworkPlugins/group/auto/Start 86.23
441 TestNetworkPlugins/group/auto/KubeletFlags 0.58
442 TestNetworkPlugins/group/auto/NetCatPod 15.51
443 TestNetworkPlugins/group/auto/DNS 0.23
444 TestNetworkPlugins/group/auto/Localhost 0.21
445 TestNetworkPlugins/group/auto/HairPin 0.2
446 TestNetworkPlugins/group/kindnet/Start 87.66
447 TestNetworkPlugins/group/calico/Start 96.71
450 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
451 TestNetworkPlugins/group/kindnet/KubeletFlags 0.68
452 TestNetworkPlugins/group/kindnet/NetCatPod 17.68
453 TestNetworkPlugins/group/kindnet/DNS 0.25
454 TestNetworkPlugins/group/kindnet/Localhost 0.22
455 TestNetworkPlugins/group/kindnet/HairPin 0.23
456 TestNetworkPlugins/group/calico/ControllerPod 5.02
457 TestNetworkPlugins/group/calico/KubeletFlags 0.58
458 TestNetworkPlugins/group/calico/NetCatPod 15.55
459 TestNetworkPlugins/group/calico/DNS 0.27
460 TestNetworkPlugins/group/calico/Localhost 0.23
461 TestNetworkPlugins/group/calico/HairPin 0.23
462 TestNetworkPlugins/group/custom-flannel/Start 73.16
463 TestStartStop/group/no-preload/serial/Stop 5.11
464 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.59
466 TestNetworkPlugins/group/false/Start 75.8
467 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.58
468 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.52
469 TestNetworkPlugins/group/custom-flannel/DNS 0.24
470 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
471 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
472 TestNetworkPlugins/group/false/KubeletFlags 0.55
473 TestNetworkPlugins/group/false/NetCatPod 14.51
474 TestNetworkPlugins/group/false/DNS 0.29
475 TestNetworkPlugins/group/false/Localhost 0.2
476 TestNetworkPlugins/group/false/HairPin 0.21
477 TestNetworkPlugins/group/enable-default-cni/Start 95.99
478 TestNetworkPlugins/group/flannel/Start 66.97
479 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.58
480 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.57
481 TestNetworkPlugins/group/flannel/ControllerPod 6.01
482 TestNetworkPlugins/group/flannel/KubeletFlags 0.54
483 TestNetworkPlugins/group/flannel/NetCatPod 15.45
484 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
485 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
486 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
487 TestStartStop/group/newest-cni/serial/DeployApp 0
489 TestNetworkPlugins/group/flannel/DNS 0.24
490 TestNetworkPlugins/group/flannel/Localhost 0.24
491 TestNetworkPlugins/group/flannel/HairPin 0.2
492 TestNetworkPlugins/group/bridge/Start 95.44
493 TestNetworkPlugins/group/kubenet/Start 87.1
494 TestStartStop/group/newest-cni/serial/Stop 1.91
495 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.54
497 TestNetworkPlugins/group/bridge/KubeletFlags 0.6
498 TestNetworkPlugins/group/bridge/NetCatPod 15.63
499 TestNetworkPlugins/group/kubenet/KubeletFlags 0.6
500 TestNetworkPlugins/group/kubenet/NetCatPod 14.5
501 TestNetworkPlugins/group/bridge/DNS 0.25
502 TestNetworkPlugins/group/bridge/Localhost 0.2
503 TestNetworkPlugins/group/bridge/HairPin 0.22
504 TestNetworkPlugins/group/kubenet/DNS 0.23
505 TestNetworkPlugins/group/kubenet/Localhost 0.21
506 TestNetworkPlugins/group/kubenet/HairPin 0.24
508 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
509 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.47

TestDownloadOnly/v1.28.0/json-events (7.3s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-367400 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-367400 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (7.3001343s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.30s)
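
The json-events subtests exercise minikube's machine-readable output: with -o=json, every stdout line is one self-contained JSON event (CloudEvents-style records carrying type and data fields). A minimal Go consumer sketch, assuming minikube is on PATH; the profile name "demo" is illustrative:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only", "-p", "demo")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some events are very long lines
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on stdout
		}
		fmt.Println(ev["type"], ev["data"])
	}
	_ = cmd.Wait()
}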

TestDownloadOnly/v1.28.0/preload-exists (0.04s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1205 06:05:27.834524    8036 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1205 06:05:27.876644    8036 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.04s)
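
The preload-exists subtests reduce to a stat of the cached tarball named in the log lines above. A sketch of the equivalent check; the cache layout (cache\preloaded-tarball under the minikube home) and the v18/overlay2/amd64 parts of the file name are read off this run's output rather than guaranteed by any API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the preloaded-images tarball for the given
// Kubernetes version and container runtime is already in the local cache.
func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
	return err == nil
}

func main() {
	home, _ := os.UserHomeDir()
	fmt.Println(preloadExists(filepath.Join(home, ".minikube"), "v1.28.0", "docker"))
}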

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.25s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-367400
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-367400: exit status 85 (242.8201ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-367400 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-367400 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:20
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:20.602813    9168 out.go:360] Setting OutFile to fd 724 ...
	I1205 06:05:20.644481    9168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:20.644481    9168 out.go:374] Setting ErrFile to fd 728...
	I1205 06:05:20.644481    9168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1205 06:05:20.655906    9168 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1205 06:05:20.662125    9168 out.go:368] Setting JSON to true
	I1205 06:05:20.664831    9168 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5978,"bootTime":1764908742,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:05:20.665351    9168 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:05:20.670078    9168 out.go:99] [download-only-367400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:05:20.671093    9168 notify.go:221] Checking for updates...
	W1205 06:05:20.671093    9168 preload.go:354] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1205 06:05:20.672161    9168 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:05:20.675289    9168 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:05:20.677290    9168 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:20.679885    9168 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1205 06:05:20.683721    9168 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:05:20.684552    9168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:05:20.906735    9168 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:05:20.910693    9168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:21.629962    9168 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-05 06:05:21.608241258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:05:21.635558    9168 out.go:99] Using the docker driver based on user configuration
	I1205 06:05:21.635558    9168 start.go:309] selected driver: docker
	I1205 06:05:21.635558    9168 start.go:927] validating driver "docker" against <nil>
	I1205 06:05:21.643881    9168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:21.890173    9168 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-05 06:05:21.873682889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:05:21.890173    9168 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:05:21.941484    9168 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1205 06:05:21.942579    9168 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:05:21.945125    9168 out.go:171] Using Docker Desktop driver with root privileges
	I1205 06:05:21.947006    9168 cni.go:84] Creating CNI manager for ""
	I1205 06:05:21.947006    9168 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:05:21.947006    9168 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 06:05:21.947006    9168 start.go:353] cluster config:
	{Name:download-only-367400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-367400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:05:21.949389    9168 out.go:99] Starting "download-only-367400" primary control-plane node in "download-only-367400" cluster
	I1205 06:05:21.949389    9168 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:05:21.951407    9168 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:05:21.951407    9168 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1205 06:05:21.951407    9168 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:05:21.987917    9168 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1205 06:05:21.987917    9168 cache.go:65] Caching tarball of preloaded images
	I1205 06:05:21.987917    9168 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1205 06:05:21.991010    9168 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1205 06:05:21.991010    9168 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1205 06:05:22.006755    9168 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:05:22.006755    9168 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1764169655-21974@sha256_5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar
	I1205 06:05:22.006755    9168 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1764169655-21974@sha256_5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar
	I1205 06:05:22.006755    9168 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1205 06:05:22.009008    9168 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:05:22.066499    9168 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1205 06:05:22.067504    9168 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-367400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-367400"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
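
This failure is the expected outcome: a --download-only profile never creates the control-plane host (hence the "host does not exist" message above), so minikube logs exits with status 85 and the subtest counts that as a pass. A sketch of asserting such an exit code from Go; the command line here is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "logs", "-p", "download-only-demo").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Println("expected failure: profile host was never created")
		return
	}
	fmt.Println("unexpected result:", err)
}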
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.25s)
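
The "windows sanitize" lines in the log above show why the kic base image ref is rewritten before being used as a cache file name: ':' is not legal in Windows file names, so it becomes '_' in the base name while the drive colon stays intact. A small sketch of that rule as inferred from the two paths shown:

package main

import (
	"fmt"
	"strings"
)

// sanitizeCachePath replaces ':' with '_' in the final path component only,
// leaving directory separators and the drive colon untouched.
func sanitizeCachePath(p string) string {
	i := strings.LastIndexAny(p, `\/`) + 1
	return p[:i] + strings.ReplaceAll(p[i:], ":", "_")
}

func main() {
	fmt.Println(sanitizeCachePath(`C:\cache\kic\amd64\kicbase-builds:v0.0.48@sha256:5caa.tar`))
}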

TestDownloadOnly/v1.28.0/DeleteAll (1.2s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2040648s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.20s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.88s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-367400
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.88s)

TestDownloadOnly/v1.34.2/json-events (4.88s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-612100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-612100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker: (4.8839381s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.88s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1205 06:05:35.092498    8036 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1205 06:05:35.092498    8036 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
--- PASS: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.54s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-612100
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-612100: exit status 85 (534.1404ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-367400 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-367400 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-367400                                                                                                                           │ download-only-367400 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-612100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker │ download-only-612100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:30.280352    1872 out.go:360] Setting OutFile to fd 768 ...
	I1205 06:05:30.323384    1872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:30.323384    1872 out.go:374] Setting ErrFile to fd 764...
	I1205 06:05:30.323384    1872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:30.337493    1872 out.go:368] Setting JSON to true
	I1205 06:05:30.340932    1872 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5988,"bootTime":1764908742,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:05:30.341063    1872 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:05:30.356261    1872 out.go:99] [download-only-612100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:05:30.356825    1872 notify.go:221] Checking for updates...
	I1205 06:05:30.358832    1872 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:05:30.361572    1872 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:05:30.363808    1872 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:30.366707    1872 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1205 06:05:30.370779    1872 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:05:30.370779    1872 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:05:30.492790    1872 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:05:30.496340    1872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:30.730459    1872 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-05 06:05:30.711579427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:05:30.733487    1872 out.go:99] Using the docker driver based on user configuration
	I1205 06:05:30.733487    1872 start.go:309] selected driver: docker
	I1205 06:05:30.733487    1872 start.go:927] validating driver "docker" against <nil>
	I1205 06:05:30.740673    1872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:30.963619    1872 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-05 06:05:30.944728587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:05:30.963619    1872 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:05:30.999145    1872 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1205 06:05:30.999845    1872 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:05:31.025429    1872 out.go:171] Using Docker Desktop driver with root privileges
	
	
	* The control-plane node download-only-612100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-612100"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.54s)

TestDownloadOnly/v1.34.2/DeleteAll (0.73s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.73s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.86s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-612100
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.86s)

TestDownloadOnly/v1.35.0-beta.0/json-events (6.96s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-912200 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-912200 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker: (6.9602184s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (6.96s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-912200
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-912200: exit status 85 (188.3629ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                           │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-367400 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker        │ download-only-367400 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-367400                                                                                                                                  │ download-only-367400 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-612100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker        │ download-only-612100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-612100                                                                                                                                  │ download-only-612100 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-912200 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker │ download-only-912200 │ minikube4\jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:37.285605    7636 out.go:360] Setting OutFile to fd 740 ...
	I1205 06:05:37.328164    7636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:37.328164    7636 out.go:374] Setting ErrFile to fd 736...
	I1205 06:05:37.328164    7636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:37.342062    7636 out.go:368] Setting JSON to true
	I1205 06:05:37.344923    7636 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5995,"bootTime":1764908742,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:05:37.344923    7636 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:05:37.352091    7636 out.go:99] [download-only-912200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:05:37.353093    7636 notify.go:221] Checking for updates...
	I1205 06:05:37.354995    7636 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:05:37.358034    7636 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:05:37.359301    7636 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:37.362467    7636 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1205 06:05:37.367345    7636 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:05:37.368229    7636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:05:37.484198    7636 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:05:37.487083    7636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:37.713772    7636 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-05 06:05:37.695796786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:05:37.717328    7636 out.go:99] Using the docker driver based on user configuration
	I1205 06:05:37.717403    7636 start.go:309] selected driver: docker
	I1205 06:05:37.717434    7636 start.go:927] validating driver "docker" against <nil>
	I1205 06:05:37.723518    7636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:37.971171    7636 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-05 06:05:37.954964547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:05:37.972173    7636 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:05:38.007689    7636 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1205 06:05:38.008376    7636 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:05:38.289293    7636 out.go:171] Using Docker Desktop driver with root privileges
	I1205 06:05:38.291566    7636 cni.go:84] Creating CNI manager for ""
	I1205 06:05:38.292204    7636 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 06:05:38.292204    7636 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 06:05:38.292338    7636 start.go:353] cluster config:
	{Name:download-only-912200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-912200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:05:38.294915    7636 out.go:99] Starting "download-only-912200" primary control-plane node in "download-only-912200" cluster
	I1205 06:05:38.294999    7636 cache.go:134] Beginning downloading kic base image for docker with docker
	I1205 06:05:38.296863    7636 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:05:38.296863    7636 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:05:38.296863    7636 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1205 06:05:38.337666    7636 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:05:38.350005    7636 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:05:38.350387    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1764169655-21974@sha256_5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar
	I1205 06:05:38.350637    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1764169655-21974@sha256_5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b.tar
	I1205 06:05:38.350637    7636 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1205 06:05:38.350776    7636 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1205 06:05:38.350830    7636 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1205 06:05:38.350920    7636 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	W1205 06:05:38.655719    7636 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1205 06:05:38.656158    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1205 06:05:38.656232    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:05:38.656232    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1205 06:05:38.656232    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:05:38.656232    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:05:38.656232    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:05:38.656232    7636 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-912200\config.json ...
	I1205 06:05:38.656232    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:05:38.656442    7636 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:05:38.656442    7636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-912200\config.json: {Name:mk6ff219dbb353c339815b063e09c6ac4b56340a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:38.657988    7636 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1205 06:05:38.661563    7636 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl
	I1205 06:05:38.661563    7636 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm
	I1205 06:05:38.661563    7636 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet
	I1205 06:05:41.398375    7636 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.35.0-beta.0/kubectl.exe
	I1205 06:05:41.535123    7636 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.536999    7636 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:05:41.550625    7636 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.552034    7636 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:05:41.559774    7636 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.560876    7636 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:05:41.560876    7636 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.561230    7636 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:05:41.561436    7636 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.562100    7636 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 06:05:41.564027    7636 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.564027    7636 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:05:41.564027    7636 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:05:41.564027    7636 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:05:41.566027    7636 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.566027    7636 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:05:41.571016    7636 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:05:41.571016    7636 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 06:05:41.571016    7636 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:05:41.575029    7636 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:05:41.576025    7636 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:05:41.593798    7636 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:41.595460    7636 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:05:41.609526    7636 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W1205 06:05:41.674250    7636 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:05:41.721206    7636 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:05:41.772202    7636 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:05:41.820340    7636 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:05:41.869014    7636 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:05:41.918575    7636 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:05:41.972983    7636 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1205 06:05:42.027430    7636 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1205 06:05:42.148633    7636 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1205 06:05:42.151484    7636 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1205 06:05:42.159157    7636 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1205 06:05:42.174092    7636 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1205 06:05:42.182713    7636 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1205 06:05:42.232724    7636 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1205 06:05:42.275399    7636 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	
	
	* The control-plane node download-only-912200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-912200"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.19s)
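Note on the preload warnings in the log above: both probes returned 404 (the GCS bucket and the GitHub releases mirror), which is expected when no preloaded-images tarball has been published for a beta Kubernetes version, so minikube fell back to caching the kubeadm/kubelet/kubectl binaries and each component image individually. A quick way to check whether a preload exists for another version is to probe the same URL pattern the log prints; this sketch only swaps in v1.34.2, the stable version used earlier in this run:

    curl -I https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4

A 200 means a start for that version can use the preload; a 404 reproduces the slower per-image fallback seen here.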

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (1.04s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0410122s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (1.04s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.69s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-912200
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.69s)

TestDownloadOnlyKic (1.67s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-726200 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-726200 --alsologtostderr --driver=docker: (1.1546178s)
helpers_test.go:175: Cleaning up "download-docker-726200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-726200
--- PASS: TestDownloadOnlyKic (1.67s)

TestBinaryMirror (2.54s)

=== RUN   TestBinaryMirror
I1205 06:05:49.671046    8036 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-653900 --alsologtostderr --binary-mirror http://127.0.0.1:54115 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-653900 --alsologtostderr --binary-mirror http://127.0.0.1:54115 --driver=docker: (1.7742729s)
helpers_test.go:175: Cleaning up "binary-mirror-653900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-653900
--- PASS: TestBinaryMirror (2.54s)
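The --binary-mirror flag exercised above redirects the kubectl, kubeadm, and kubelet downloads away from dl.k8s.io; the test points it at a short-lived local server on port 54115. A hedged sketch of real-world use follows, where mirror.example.internal is a hypothetical host that must expose the same version/bin/os/arch layout as the dl.k8s.io release tree:

    minikube start --download-only --binary-mirror http://mirror.example.internal:8080 --driver=docker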

TestOffline (128.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-852300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-852300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (2m4.2754717s)
helpers_test.go:175: Cleaning up "offline-docker-852300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-852300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-852300: (3.7486055s)
--- PASS: TestOffline (128.02s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-925500
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-925500: exit status 85 (212.8029ms)

-- stdout --
	* Profile "addons-925500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925500"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-925500
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-925500: exit status 85 (188.4633ms)

-- stdout --
	* Profile "addons-925500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925500"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (391.44s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-925500 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-925500 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m31.4423216s)
--- PASS: TestAddons/Setup (391.44s)
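The start above switches on fifteen addons at once via repeated --addons flags. The same switches work per addon on the running cluster, which is how the parallel tests below clean up after themselves; for example (profile name taken from this run):

    out/minikube-windows-amd64.exe addons list -p addons-925500
    out/minikube-windows-amd64.exe -p addons-925500 addons disable volcano --alsologtostderr -v=1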

TestAddons/serial/Volcano (52.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 18.1339ms
addons_test.go:868: volcano-scheduler stabilized in 20.0666ms
addons_test.go:876: volcano-admission stabilized in 20.0666ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-j9s27" [fd4ed13a-332c-4e74-847a-86ca32879659] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0060698s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-dghj5" [61a25cbe-e08d-4b9c-8ba3-95475ea24b81] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0051724s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-crn2q" [560f1a39-70c6-4ae3-b228-0106fd07c1b8] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0070347s
addons_test.go:903: (dbg) Run:  kubectl --context addons-925500 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-925500 create -f testdata\vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-925500 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [bdf81e44-a4e7-454e-9be2-807a29bf4868] Pending
helpers_test.go:352: "test-job-nginx-0" [bdf81e44-a4e7-454e-9be2-807a29bf4868] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [bdf81e44-a4e7-454e-9be2-807a29bf4868] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 21.008054s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable volcano --alsologtostderr -v=1: (12.4233809s)
--- PASS: TestAddons/serial/Volcano (52.32s)
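The job manifest applied from testdata\vcjob.yaml is not echoed in the log. A minimal Volcano Job of the shape this test exercises would look roughly like the sketch below; the image and counts are illustrative assumptions, not the test's actual file. The pod name test-job-nginx-0 and the volcano.sh/job-name=test-job label the test waits on are derived by Volcano from the job and task names:

    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano
      minAvailable: 1
      tasks:
        - name: nginx
          replicas: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx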

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-925500 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-925500 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (9.16s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-925500 create -f testdata\busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-925500 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a4b69ac9-816e-4b70-9030-7ff753536b37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a4b69ac9-816e-4b70-9030-7ff753536b37] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.0065416s
addons_test.go:694: (dbg) Run:  kubectl --context addons-925500 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-925500 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-925500 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-925500 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.16s)

TestAddons/parallel/RegistryCreds (1.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 10.3367ms
addons_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-925500
addons_test.go:332: (dbg) Run:  kubectl --context addons-925500 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.43s)

TestAddons/parallel/InspektorGadget (12.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-ddncf" [4618c484-1bf7-4ed0-bc52-a4b6d71ea9ef] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0056943s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable inspektor-gadget --alsologtostderr -v=1: (6.7213556s)
--- PASS: TestAddons/parallel/InspektorGadget (12.73s)

TestAddons/parallel/MetricsServer (7.22s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.0118ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qqw5g" [e7cd217e-8030-4634-821b-9151c005fb1b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0067424s
addons_test.go:463: (dbg) Run:  kubectl --context addons-925500 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable metrics-server --alsologtostderr -v=1: (1.0679608s)
--- PASS: TestAddons/parallel/MetricsServer (7.22s)
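kubectl top only returns data once the aggregated metrics API registered by metrics-server is serving. When it comes back empty, querying the API group directly narrows down whether the APIService registration or the pods are at fault:

    kubectl --context addons-925500 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-925500 get --raw /apis/metrics.k8s.io/v1beta1/pods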

TestAddons/parallel/CSI (50.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1205 06:14:03.626052    8036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 06:14:03.633715    8036 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 06:14:03.633757    8036 kapi.go:107] duration metric: took 7.7849ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.8053ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-925500 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-925500 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bb04343f-e8e6-48af-a480-a2cdc84b3389] Pending
helpers_test.go:352: "task-pv-pod" [bb04343f-e8e6-48af-a480-a2cdc84b3389] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [bb04343f-e8e6-48af-a480-a2cdc84b3389] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.021865s
addons_test.go:572: (dbg) Run:  kubectl --context addons-925500 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-925500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-925500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-925500 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-925500 delete pod task-pv-pod: (1.997381s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-925500 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-925500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-925500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f406cd8e-3db2-414b-b4e6-869b7964cfa0] Pending
helpers_test.go:352: "task-pv-pod-restore" [f406cd8e-3db2-414b-b4e6-869b7964cfa0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f406cd8e-3db2-414b-b4e6-869b7964cfa0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0059768s
addons_test.go:614: (dbg) Run:  kubectl --context addons-925500 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-925500 delete pod task-pv-pod-restore: (1.4593814s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-925500 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-925500 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable volumesnapshots --alsologtostderr -v=1: (1.3920554s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.3350047s)
--- PASS: TestAddons/parallel/CSI (50.56s)
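The snapshot and restore steps above use the standard snapshot.storage.k8s.io/v1 flow: cut a VolumeSnapshot from the bound PVC, then create a new PVC whose dataSource points at it. A sketch of the two objects follows; the object names match the log, but the class names are assumptions based on the csi-hostpath-driver addon's usual defaults rather than the actual testdata files:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass
      source:
        persistentVolumeClaimName: hpvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io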

TestAddons/parallel/Headlamp (49.23s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-925500 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-925500 --alsologtostderr -v=1: (1.8718768s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-4fd49" [670a9653-1f8b-47cf-85df-bc1ca77c6673] Pending
helpers_test.go:352: "headlamp-dfcdc64b-4fd49" [670a9653-1f8b-47cf-85df-bc1ca77c6673] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-4fd49" [670a9653-1f8b-47cf-85df-bc1ca77c6673] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-4fd49" [670a9653-1f8b-47cf-85df-bc1ca77c6673] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 40.0188451s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable headlamp --alsologtostderr -v=1: (7.3363118s)
--- PASS: TestAddons/parallel/Headlamp (49.23s)

TestAddons/parallel/CloudSpanner (7.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-qnd7w" [73ae12ea-3eb2-49c8-b673-ba36d0cec571] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.3440101s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable cloud-spanner --alsologtostderr -v=1: (1.1035029s)
--- PASS: TestAddons/parallel/CloudSpanner (7.49s)

TestAddons/parallel/LocalPath (86.23s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-925500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-925500 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2b0a9a0e-d9be-4de4-a5e1-1bdf26338c9f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2b0a9a0e-d9be-4de4-a5e1-1bdf26338c9f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2b0a9a0e-d9be-4de4-a5e1-1bdf26338c9f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 34.0143329s
addons_test.go:967: (dbg) Run:  kubectl --context addons-925500 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 ssh "cat /opt/local-path-provisioner/pvc-9ee813af-9f2d-4c19-93ce-ee00ae08fbac_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-925500 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-925500 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.4463988s)
--- PASS: TestAddons/parallel/LocalPath (86.23s)
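
The repeated helpers_test.go:402 invocations above are a poll loop on the claim's phase. For illustration only, a minimal Go sketch of that kind of wait, assuming the addons-925500 context and the test-pvc claim from the test's testdata (the local-path provisioner binds the claim only once the consuming pod is scheduled, so early iterations return Pending):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			// Same probe the helper runs: read only the PVC's status.phase.
			out, _ := exec.Command("kubectl", "--context", "addons-925500",
				"get", "pvc", "test-pvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if string(out) == "Bound" {
				fmt.Println("test-pvc is Bound")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for test-pvc")
	}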

TestAddons/parallel/NvidiaDevicePlugin (6.86s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6tvqq" [508969c3-2a9c-4ddc-8de3-3ed85631c0af] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0058818s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.86s)

TestAddons/parallel/Yakd (12.39s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-67ln7" [4de1412e-28b2-463c-9175-711207571cad] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0063786s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable yakd --alsologtostderr -v=1: (6.3851471s)
--- PASS: TestAddons/parallel/Yakd (12.39s)

TestAddons/parallel/AmdGpuDevicePlugin (6.86s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-z7m4b" [9aeb912b-92b7-407f-abf8-a0ca5a0229c1] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0066551s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.86s)

TestAddons/StoppedEnableDisable (12.89s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-925500
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-925500: (12.0809929s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-925500
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-925500
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-925500
--- PASS: TestAddons/StoppedEnableDisable (12.89s)

TestCertOptions (59.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-180400 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-180400 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (45.4958769s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-180400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1205 07:46:58.957793    8036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-180400
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-180400 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-180400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-180400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-180400: (12.295675s)
--- PASS: TestCertOptions (59.06s)
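
For reference, a minimal Go sketch of the SAN check behind cert_options_test.go:60, assuming the cert-options-180400 profile were still running: decode the apiserver certificate inside the node and confirm the extra names and IPs passed at start time appear in it.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Decode the apiserver serving certificate inside the minikube node.
		out, err := exec.Command("minikube", "-p", "cert-options-180400", "ssh",
			"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		// The start flags above requested these as extra SANs.
		for _, want := range []string{"192.168.15.15", "www.google.com", "localhost"} {
			if !strings.Contains(string(out), want) {
				fmt.Println("missing SAN:", want)
			}
		}
	}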

TestCertExpiration (278.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-463600 --memory=3072 --cert-expiration=3m --driver=docker
E1205 07:45:22.669375    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-463600 --memory=3072 --cert-expiration=3m --driver=docker: (48.0451588s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-463600 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-463600 --memory=3072 --cert-expiration=8760h --driver=docker: (45.5741217s)
helpers_test.go:175: Cleaning up "cert-expiration-463600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-463600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-463600: (4.5313587s)
--- PASS: TestCertExpiration (278.15s)
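
The two --cert-expiration values above are deliberate extremes: certificates that lapse after three minutes, then a one-year renewal. A quick check that 8760h really is 365 days:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		short, _ := time.ParseDuration("3m")   // first start: certs expire mid-test
		long, _ := time.ParseDuration("8760h") // second start: renewed for a year
		fmt.Println(short)                     // 3m0s
		fmt.Println(long.Hours()/24, "days")   // 365 days
	}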

TestDockerFlags (60.4s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-267700 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-267700 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (55.3876563s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-267700 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-267700 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-267700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-267700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-267700: (3.8279119s)
--- PASS: TestDockerFlags (60.40s)
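
docker_test.go:56 and :67 assert that the --docker-env pairs land in the daemon unit's Environment= and the --docker-opt values in its ExecStart=. A minimal sketch of that assertion, assuming the docker-flags-267700 profile were still up (the exact ExecStart spellings, --debug and --icc=true, are an assumption here):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// show reads one property of the docker unit inside the minikube node.
	func show(profile, property string) string {
		out, _ := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl show docker --property="+property+" --no-pager").Output()
		return string(out)
	}

	func main() {
		const profile = "docker-flags-267700"
		env := show(profile, "Environment")
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Println(kv, "in Environment:", strings.Contains(env, kv))
		}
		start := show(profile, "ExecStart")
		for _, opt := range []string{"--debug", "--icc=true"} {
			fmt.Println(opt, "in ExecStart:", strings.Contains(start, opt))
		}
	}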

TestForceSystemdFlag (54.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-684600 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-684600 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: (49.8970994s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-684600 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-684600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-684600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-684600: (3.7165173s)
--- PASS: TestForceSystemdFlag (54.24s)

TestForceSystemdEnv (50s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-032100 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-032100 --memory=3072 --alsologtostderr -v=5 --driver=docker: (45.4399136s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-032100 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-032100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-032100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-032100: (3.9283874s)
--- PASS: TestForceSystemdEnv (50.00s)

TestErrorSpam/start (2.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 start --dry-run
--- PASS: TestErrorSpam/start (2.59s)

TestErrorSpam/status (2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 status
--- PASS: TestErrorSpam/status (2.00s)

TestErrorSpam/pause (2.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 pause: (1.1678759s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 pause
--- PASS: TestErrorSpam/pause (2.59s)

TestErrorSpam/unpause (2.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 unpause
--- PASS: TestErrorSpam/unpause (2.61s)

TestErrorSpam/stop (18.96s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 stop: (11.9266156s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 stop: (3.3437357s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-472400 stop: (3.6892859s)
--- PASS: TestErrorSpam/stop (18.96s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (79.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-088800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1205 06:17:23.883200    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:23.890675    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:23.902118    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:23.924089    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:23.965725    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:24.047549    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:24.209551    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:24.532084    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:25.174276    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:26.456318    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:29.018260    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:34.140495    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:17:44.382450    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-088800 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m19.5373635s)
--- PASS: TestFunctional/serial/StartWithProxy (79.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (54s)

=== RUN   TestFunctional/serial/SoftStart
I1205 06:17:56.150131    8036 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-088800 --alsologtostderr -v=8
E1205 06:18:04.865028    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:45.829134    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-088800 --alsologtostderr -v=8: (54.002568s)
functional_test.go:678: soft start took 54.0037056s for "functional-088800" cluster.
I1205 06:18:50.153439    8036 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (54.00s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-088800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 cache add registry.k8s.io/pause:3.1: (3.4240953s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 cache add registry.k8s.io/pause:3.3: (3.1101441s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 cache add registry.k8s.io/pause:latest: (3.1425102s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.68s)

TestFunctional/serial/CacheCmd/cache/add_local (4.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-088800 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3575420083\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-088800 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3575420083\001: (1.3662791s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cache add minikube-local-cache-test:functional-088800
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 cache add minikube-local-cache-test:functional-088800: (2.6253143s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cache delete minikube-local-cache-test:functional-088800
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-088800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctional/serial/CacheCmd/cache/list (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.60s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (577.3167ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 cache reload: (2.7713908s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.49s)
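
The sequence above is the whole point of cache reload: remove the image from the node, prove crictl can no longer find it, then repopulate the node from minikube's on-host cache. A minimal sketch of the same round trip, assuming a running functional-088800 profile:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk runs a minikube subcommand against the functional-088800 profile.
	func mk(args ...string) error {
		return exec.Command("minikube", append([]string{"-p", "functional-088800"}, args...)...).Run()
	}

	func main() {
		_ = mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		// The image is gone from the node, so inspecti must fail ...
		if mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}
		// ... until cache reload pushes the cached image back into the runtime.
		_ = mk("cache", "reload")
		if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}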

TestFunctional/serial/CacheCmd/cache/delete (0.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.39s)

TestFunctional/serial/MinikubeKubectlCmd (0.48s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 kubectl -- --context functional-088800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.33s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-088800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.33s)

TestFunctional/serial/ExtraConfig (61.61s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-088800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:20:07.752841    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-088800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m1.610547s)
functional_test.go:776: restart took 1m1.610547s for "functional-088800" cluster.
I1205 06:20:12.708732    8036 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (61.61s)

TestFunctional/serial/ComponentHealth (0.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-088800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)
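
The phase/status pairs above come from one kubectl query. For illustration, a small Go sketch that reproduces the check, assuming the functional-088800 context: fetch the control-plane pods as JSON and report each one's phase and Ready condition.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-088800",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			// kubeadm static pods carry a "component" label (e.g. kube-apiserver).
			fmt.Printf("%s phase: %s, Ready: %s\n",
				p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}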

TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 logs: (1.7414065s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

TestFunctional/serial/LogsFileCmd (1.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2608650809\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2608650809\001\logs.txt: (1.8089589s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)

TestFunctional/serial/InvalidService (5.87s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-088800 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-088800
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-088800: exit status 115 (1.0423797s)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32442 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-088800 delete -f testdata\invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-088800 delete -f testdata\invalidsvc.yaml: (1.4450516s)
--- PASS: TestFunctional/serial/InvalidService (5.87s)
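
Exit status 115 is how the SVC_UNREACHABLE failure above surfaces to callers. A minimal sketch of recovering that code in Go while the broken service from testdata\invalidsvc.yaml is still applied:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-088800").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 115 while no pod backs the service
		}
	}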

TestFunctional/parallel/ConfigCmd (1.2s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 config get cpus: exit status 14 (178.2245ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 config get cpus: exit status 14 (156.7162ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.20s)

TestFunctional/parallel/DryRun (1.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (654.3667ms)
-- stdout --
	* [functional-088800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1205 06:20:54.629562    5728 out.go:360] Setting OutFile to fd 1424 ...
	I1205 06:20:54.678574    5728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:54.678574    5728 out.go:374] Setting ErrFile to fd 1496...
	I1205 06:20:54.678574    5728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:54.698462    5728 out.go:368] Setting JSON to false
	I1205 06:20:54.702476    5728 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6912,"bootTime":1764908742,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:20:54.702476    5728 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:20:54.705466    5728 out.go:179] * [functional-088800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:20:54.708472    5728 notify.go:221] Checking for updates...
	I1205 06:20:54.710467    5728 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:20:54.713465    5728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:20:54.715462    5728 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:20:54.718463    5728 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:20:54.720463    5728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:20:54.723473    5728 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 06:20:54.724475    5728 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:20:54.853462    5728 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:20:54.856464    5728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:20:55.105214    5728 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-05 06:20:55.075121431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:20:55.111485    5728 out.go:179] * Using the docker driver based on existing profile
	I1205 06:20:55.114868    5728 start.go:309] selected driver: docker
	I1205 06:20:55.114926    5728 start.go:927] validating driver "docker" against &{Name:functional-088800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-088800 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:20:55.115068    5728 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:20:55.165735    5728 out.go:203] 
	W1205 06:20:55.167739    5728 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:20:55.169740    5728 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-088800 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.52s)

TestFunctional/parallel/InternationalLanguage (0.6s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-088800 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (596.6232ms)
-- stdout --
	* [functional-088800] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1205 06:20:56.157065   11788 out.go:360] Setting OutFile to fd 1404 ...
	I1205 06:20:56.205599   11788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:56.205599   11788 out.go:374] Setting ErrFile to fd 1412...
	I1205 06:20:56.205599   11788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:56.218505   11788 out.go:368] Setting JSON to false
	I1205 06:20:56.221499   11788 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6914,"bootTime":1764908742,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:20:56.221499   11788 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:20:56.228020   11788 out.go:179] * [functional-088800] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:20:56.232877   11788 notify.go:221] Checking for updates...
	I1205 06:20:56.235247   11788 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:20:56.237191   11788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:20:56.239066   11788 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:20:56.241516   11788 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:20:56.243130   11788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:20:56.246233   11788 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 06:20:56.247372   11788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:20:56.361890   11788 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:20:56.365297   11788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:20:56.592014   11788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-05 06:20:56.572717797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:20:56.596025   11788 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 06:20:56.598013   11788 start.go:309] selected driver: docker
	I1205 06:20:56.598013   11788 start.go:927] validating driver "docker" against &{Name:functional-088800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-088800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:20:56.598013   11788 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:20:56.634820   11788 out.go:203] 
	W1205 06:20:56.637823   11788 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:20:56.639820   11788 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.60s)

TestFunctional/parallel/StatusCmd (1.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.90s)
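
For reference, the -f argument above is a Go template rendered against minikube's status struct. A minimal standalone sketch of how such a template behaves (field names taken from the flags shown here, not from the minikube source); note the test's format string literally spells the key "kublet", and that text is printed as-is because only the {{.Field}} references are resolved:

package main

import (
	"os"
	"text/template"
)

// Status mirrors only the fields referenced by the format string above;
// the real minikube status type has more fields (assumed, for illustration).
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same template text as the -f argument in the log above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	t := template.Must(template.New("status").Parse(format))
	_ = t.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}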

TestFunctional/parallel/AddonsCmd (0.42s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.42s)

TestFunctional/parallel/PersistentVolumeClaim (38.09s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [0d8b65b1-c9a1-40cc-ab73-5ec02807d9cf] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005714s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-088800 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-088800 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-088800 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-088800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c8849076-5334-4f50-a45f-4760caff848f] Pending
helpers_test.go:352: "sp-pod" [c8849076-5334-4f50-a45f-4760caff848f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c8849076-5334-4f50-a45f-4760caff848f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.0065052s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-088800 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-088800 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-088800 delete -f testdata/storage-provisioner/pod.yaml: (1.3624528s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-088800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [035f6a30-5a55-4989-a7bc-1013afb986dc] Pending
helpers_test.go:352: "sp-pod" [035f6a30-5a55-4989-a7bc-1013afb986dc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [035f6a30-5a55-4989-a7bc-1013afb986dc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0232802s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-088800 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.09s)
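
The sequence above is a persistence check: write a file through the PVC-backed mount, delete the pod, recreate it against the same claim, and confirm the file survived. A minimal replay of the same steps from Go (the run helper is hypothetical; the readiness waits the real test performs between steps are omitted; manifest paths and pod name are the ones shown in the log):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes the combined output; errors abort.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml") // fresh pod, same claim
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // "foo" should still be listed
}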

TestFunctional/parallel/SSHCmd (1.25s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.25s)

TestFunctional/parallel/CpCmd (3.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh -n functional-088800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cp functional-088800:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3663955498\001\cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh -n functional-088800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh -n functional-088800 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.49s)

TestFunctional/parallel/MySQL (56.1s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-088800 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-b4q48" [b42dd2b0-c45d-4c84-ac5e-4ddbebde230d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-b4q48" [b42dd2b0-c45d-4c84-ac5e-4ddbebde230d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 45.0069549s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;": exit status 1 (210.7008ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1205 06:21:40.496683    8036 retry.go:31] will retry after 1.099919573s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;": exit status 1 (194.2143ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1205 06:21:41.795010    8036 retry.go:31] will retry after 2.100010169s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;": exit status 1 (198.2552ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1205 06:21:44.097688    8036 retry.go:31] will retry after 2.429356563s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;": exit status 1 (195.4065ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1205 06:21:46.727544    8036 retry.go:31] will retry after 4.020634184s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088800 exec mysql-5bb876957f-b4q48 -- mysql -ppassword -e "show databases;"
E1205 06:22:23.887338    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:22:51.596934    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (56.10s)
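
The "will retry after ..." lines come from minikube's retry helper. Judging only by the delays printed above (roughly 1.1s, 2.1s, 2.4s, 4.0s), it behaves like a jittered exponential backoff; the following is an illustrative sketch of that pattern under that assumption, not the actual package code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries f with a delay that doubles each attempt, plus
// up to 50% random jitter (an assumed model of the behavior logged above).
func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("mysqld not ready yet") // e.g. ERROR 2002 above
		}
		return nil
	})
	fmt.Println("final result:", err)
}

The early ERROR 2002 / ERROR 1045 responses above are the usual signs of a MySQL container that is up but still initializing, which is why retrying until "show databases;" succeeds is sufficient here.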

TestFunctional/parallel/FileSync (0.57s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8036/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo cat /etc/test/nested/copy/8036/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.57s)

TestFunctional/parallel/CertSync (4.05s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8036.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo cat /etc/ssl/certs/8036.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8036.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo cat /usr/share/ca-certificates/8036.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/80362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo cat /etc/ssl/certs/80362.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/80362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo cat /usr/share/ca-certificates/80362.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.05s)

TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-088800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 ssh "sudo systemctl is-active crio": exit status 1 (646.4188ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
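
The non-zero exit is expected here: under standard systemd semantics, systemctl is-active prints the unit state and exits 0 only when the unit is active, so "inactive" plus exit status 3 is exactly what a runtime that should be disabled produces. A small sketch of reading both pieces from Go (illustrative, not the test's implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() still returns the captured stdout when the command exits
	// non-zero, so the state string and the exit code are both available.
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	state := strings.TrimSpace(string(out))
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	// "inactive" with a non-zero code (3 in the log above) is the
	// expected, passing outcome for a disabled runtime.
	fmt.Printf("state=%q exit=%d\n", state, code)
}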

TestFunctional/parallel/License (1.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.6046658s)
--- PASS: TestFunctional/parallel/License (1.62s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-088800 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-088800 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-jq8z4" [06c7187e-506f-49b5-b237-7e6146ec61df] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-jq8z4" [06c7187e-506f-49b5-b237-7e6146ec61df] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.0195769s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-088800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-088800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-088800 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-088800 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 11956: OpenProcess: The parameter is incorrect.
helpers_test.go:519: unable to terminate pid 14000: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-088800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-088800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9b9995da-1dfe-497a-86e1-8b6850a78b32] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9b9995da-1dfe-497a-86e1-8b6850a78b32] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0054998s
I1205 06:20:37.616749    8036 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.40s)

TestFunctional/parallel/ServiceCmd/List (0.82s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.82s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.86s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 service list -o json
functional_test.go:1504: Took "862.5868ms" to run "out/minikube-windows-amd64.exe -p functional-088800 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.86s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 service --namespace=default --https --url hello-node: exit status 1 (15.0092296s)

-- stdout --
	https://127.0.0.1:55217

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:55217
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-088800 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-088800 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 10072: OpenProcess: The parameter is incorrect.
helpers_test.go:525: unable to kill pid 788: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/Version/short (0.22s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.22s)

TestFunctional/parallel/Version/components (1.9s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 version -o=json --components: (1.9014509s)
--- PASS: TestFunctional/parallel/Version/components (1.90s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-088800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-088800
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-088800
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-088800 image ls --format short --alsologtostderr:
I1205 06:21:04.809501    3440 out.go:360] Setting OutFile to fd 1472 ...
I1205 06:21:04.856492    3440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:04.856492    3440 out.go:374] Setting ErrFile to fd 1352...
I1205 06:21:04.856492    3440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:04.870318    3440 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:04.870318    3440 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:04.881857    3440 cli_runner.go:164] Run: docker container inspect functional-088800 --format={{.State.Status}}
I1205 06:21:04.943594    3440 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:04.946590    3440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-088800
I1205 06:21:04.997609    3440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54969 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-088800\id_rsa Username:docker}
I1205 06:21:05.140104    3440 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.50s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-088800 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/minikube-local-cache-test │ functional-088800 │ d2df05676814b │ 30B    │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-088800 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/nginx                     │ latest            │ 60adc2e137e75 │ 152MB  │
│ docker.io/library/nginx                     │ alpine            │ d4918ca78576a │ 52.8MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ localhost/my-image                          │ functional-088800 │ 28b9f250eef5c │ 1.24MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-088800 image ls --format table --alsologtostderr:
I1205 06:21:14.320487    5332 out.go:360] Setting OutFile to fd 1536 ...
I1205 06:21:14.361490    5332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:14.361490    5332 out.go:374] Setting ErrFile to fd 1620...
I1205 06:21:14.361490    5332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:14.374489    5332 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:14.374489    5332 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:14.381481    5332 cli_runner.go:164] Run: docker container inspect functional-088800 --format={{.State.Status}}
I1205 06:21:14.438484    5332 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:14.441482    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-088800
I1205 06:21:14.495603    5332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54969 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-088800\id_rsa Username:docker}
I1205 06:21:14.676692    5332 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.56s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-088800 image ls --format json --alsologtostderr:
[{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-088800","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"d2df05676814b0460eb5bc00e9587cffa1d06f480e1a9cac0b09c806935c238a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-088800"],"size":"30"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"152000000"},{"id":"28b9f250eef5c9b2126e544ae963b1604f67935032cd29c7a37a41dbbb16a966","repoDigests":[],"repoTags":["localhost/my-image:functional-088800"],"size":"1240000"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-088800 image ls --format json --alsologtostderr:
I1205 06:21:13.856637    2116 out.go:360] Setting OutFile to fd 1540 ...
I1205 06:21:13.904645    2116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:13.904645    2116 out.go:374] Setting ErrFile to fd 928...
I1205 06:21:13.904645    2116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:13.915644    2116 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:13.915644    2116 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:13.925655    2116 cli_runner.go:164] Run: docker container inspect functional-088800 --format={{.State.Status}}
I1205 06:21:13.981663    2116 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:13.984642    2116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-088800
I1205 06:21:14.036654    2116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54969 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-088800\id_rsa Username:docker}
I1205 06:21:14.175926    2116 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)
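
The JSON format is an array of objects with id, repoDigests, repoTags, and size fields (shape inferred from the output above, not from the minikube source). A minimal decoding sketch:

package main

import (
	"encoding/json"
	"fmt"
)

// image matches the JSON objects printed by "image ls --format json" above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One entry copied (shortened) from the output above.
	data := []byte(`[{"id":"350b164e7ae1d","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]`)
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.RepoTags, img.Size)
	}
}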

TestFunctional/parallel/ImageCommands/ImageListYaml (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-088800 image ls --format yaml --alsologtostderr:
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "152000000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-088800
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d2df05676814b0460eb5bc00e9587cffa1d06f480e1a9cac0b09c806935c238a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-088800
size: "30"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-088800 image ls --format yaml --alsologtostderr:
I1205 06:21:05.304921    7664 out.go:360] Setting OutFile to fd 1832 ...
I1205 06:21:05.347981    7664 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:05.348047    7664 out.go:374] Setting ErrFile to fd 1912...
I1205 06:21:05.348047    7664 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:05.359614    7664 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:05.359614    7664 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:05.367275    7664 cli_runner.go:164] Run: docker container inspect functional-088800 --format={{.State.Status}}
I1205 06:21:05.431094    7664 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:05.434973    7664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-088800
I1205 06:21:05.492525    7664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54969 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-088800\id_rsa Username:docker}
I1205 06:21:05.627040    7664 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.48s)

TestFunctional/parallel/ImageCommands/ImageBuild (8.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 ssh pgrep buildkitd: exit status 1 (537.7432ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr: (7.0939245s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-088800 image build -t localhost/my-image:functional-088800 testdata\build --alsologtostderr:
I1205 06:21:06.318370    8024 out.go:360] Setting OutFile to fd 1448 ...
I1205 06:21:06.380447    8024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:06.380447    8024 out.go:374] Setting ErrFile to fd 428...
I1205 06:21:06.380522    8024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:06.394337    8024 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:06.415665    8024 config.go:182] Loaded profile config "functional-088800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1205 06:21:06.424658    8024 cli_runner.go:164] Run: docker container inspect functional-088800 --format={{.State.Status}}
I1205 06:21:06.490962    8024 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:06.494975    8024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-088800
I1205 06:21:06.543965    8024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54969 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-088800\id_rsa Username:docker}
I1205 06:21:06.733200    8024 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1360849081.tar
I1205 06:21:06.737979    8024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:21:06.758036    8024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1360849081.tar
I1205 06:21:06.766647    8024 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1360849081.tar: stat -c "%s %y" /var/lib/minikube/build/build.1360849081.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1360849081.tar': No such file or directory
I1205 06:21:06.766846    8024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1360849081.tar --> /var/lib/minikube/build/build.1360849081.tar (3072 bytes)
I1205 06:21:06.807250    8024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1360849081
I1205 06:21:06.826719    8024 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1360849081 -xf /var/lib/minikube/build/build.1360849081.tar
I1205 06:21:06.871173    8024 docker.go:361] Building image: /var/lib/minikube/build/build.1360849081
I1205 06:21:06.875203    8024 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-088800 /var/lib/minikube/build/build.1360849081
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 3.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:28b9f250eef5c9b2126e544ae963b1604f67935032cd29c7a37a41dbbb16a966
#8 writing image sha256:28b9f250eef5c9b2126e544ae963b1604f67935032cd29c7a37a41dbbb16a966 0.0s done
#8 naming to localhost/my-image:functional-088800 0.0s done
#8 DONE 0.2s
I1205 06:21:13.268486    8024 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-088800 /var/lib/minikube/build/build.1360849081: (6.3931655s)
I1205 06:21:13.272978    8024 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1360849081
I1205 06:21:13.293299    8024 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1360849081.tar
I1205 06:21:13.309319    8024 build_images.go:218] Built localhost/my-image:functional-088800 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1360849081.tar
I1205 06:21:13.309319    8024 build_images.go:134] succeeded building to: functional-088800
I1205 06:21:13.309319    8024 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.08s)
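
Note: as the log above shows, "minikube image build" tars the local build context, copies it to /var/lib/minikube/build on the node, and runs "docker build" there. A minimal sketch of the same flow (the Dockerfile contents are inferred from build steps #5-#7 and are an assumption, not the literal testdata\build file):

	# Dockerfile (inferred, an assumption):
	#   FROM gcr.io/k8s-minikube/busybox:latest
	#   RUN true
	#   ADD content.txt /
	out/minikube-windows-amd64.exe -p functional-088800 image build -t localhost/my-image:functional-088800 testdata\build
	out/minikube-windows-amd64.exe -p functional-088800 image ls    # localhost/my-image:functional-088800 should now be listed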

TestFunctional/parallel/ImageCommands/Setup (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7414041s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-088800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image load --daemon kicbase/echo-server:functional-088800 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 image load --daemon kicbase/echo-server:functional-088800 --alsologtostderr: (2.8530845s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.33s)
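
Note: the daemon-load path exercised here is pull into the host daemon, retag for the profile, then copy into the node's daemon. A minimal sketch using the same commands as Setup and this test:

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-088800
	out/minikube-windows-amd64.exe -p functional-088800 image load --daemon kicbase/echo-server:functional-088800
	out/minikube-windows-amd64.exe -p functional-088800 image ls    # the tag should now appear inside the node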

TestFunctional/parallel/UpdateContextCmd/no_changes (0.35s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.35s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)
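
Note: all three update-context cases run the same command; they differ only in the kubeconfig state they start from. "update-context" rewrites the profile's kubeconfig entry so the server address matches the cluster's current IP and port. A minimal sketch:

	out/minikube-windows-amd64.exe -p functional-088800 update-context
	kubectl config current-context    # typically still functional-088800 after the update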

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image load --daemon kicbase/echo-server:functional-088800 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 image load --daemon kicbase/echo-server:functional-088800 --alsologtostderr: (2.5213158s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.03s)

TestFunctional/parallel/DockerEnv/powershell (5.42s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-088800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-088800"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-088800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-088800": (3.2245521s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-088800 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-088800 docker-env | Invoke-Expression ; docker images": (2.1872263s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (5.42s)
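
Note: under PowerShell, "docker-env" emits $Env:DOCKER_HOST-style assignments, so piping its output through Invoke-Expression points the host docker CLI at the daemon inside the node for the rest of the session. A minimal sketch:

	out/minikube-windows-amd64.exe -p functional-088800 docker-env | Invoke-Expression
	docker images    # now lists the images held by the functional-088800 node's daemon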

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-088800
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image load --daemon kicbase/echo-server:functional-088800 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-088800 image load --daemon kicbase/echo-server:functional-088800 --alsologtostderr: (2.5789441s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.83s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 service hello-node --url --format={{.IP}}: exit status 1 (15.0086944s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image save kicbase/echo-server:functional-088800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.00s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image rm kicbase/echo-server:functional-088800 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

TestFunctional/parallel/ProfileCmd/profile_list (0.94s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "773.265ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "170.5866ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.94s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)
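
Note: together with ImageSaveToFile above, this verifies the tar round trip: "image save" exports a node image to a host tarball and "image load" imports a tarball back into the node. A sketch (the relative tar path is illustrative; the test uses an absolute workspace path):

	out/minikube-windows-amd64.exe -p functional-088800 image save kicbase/echo-server:functional-088800 echo-server-save.tar
	out/minikube-windows-amd64.exe -p functional-088800 image load echo-server-save.tar
	out/minikube-windows-amd64.exe -p functional-088800 image ls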

TestFunctional/parallel/ProfileCmd/profile_json_output (0.89s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "724.1015ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "166.6486ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-088800
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 image save --daemon kicbase/echo-server:functional-088800 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-088800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)
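
Note: "image save --daemon" is the inverse of "image load --daemon": it copies an image out of the node and into the host's docker daemon. A sketch mirroring the log:

	docker rmi kicbase/echo-server:functional-088800              # drop the host copy first
	out/minikube-windows-amd64.exe -p functional-088800 image save --daemon kicbase/echo-server:functional-088800
	docker image inspect kicbase/echo-server:functional-088800    # present on the host again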

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-088800 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-088800 service hello-node --url: exit status 1 (15.010378s)

-- stdout --
	http://127.0.0.1:55337

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:55337
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)
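
Note: with the Docker driver on Windows, "service --url" opens a tunnel and keeps running while the URL is valid (hence the stderr hint about keeping the terminal open); the test kills the blocking process after ~15s, so exit status 1 here still counts as a pass once the endpoint is found. A usage sketch (the port is assigned dynamically):

	out/minikube-windows-amd64.exe -p functional-088800 service hello-node --url
	# prints http://127.0.0.1:<port> and blocks while the tunnel stays up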

TestFunctional/delete_echo-server_images (0.15s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-088800
--- PASS: TestFunctional/delete_echo-server_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-088800
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-088800
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\8036\hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (9.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 cache add registry.k8s.io/pause:3.1: (3.2943392s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 cache add registry.k8s.io/pause:3.3: (3.0515037s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 cache add registry.k8s.io/pause:latest: (3.1269156s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (9.47s)
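
Note: "cache add" pulls an image into the local cache under the minikube home and loads it into the cluster; "cache list" and "cache delete" manage that local cache. A sketch of the commands this group exercises:

	out/minikube-windows-amd64.exe -p functional-247800 cache add registry.k8s.io/pause:3.1
	out/minikube-windows-amd64.exe cache list
	out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1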

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-247800 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2748844621\001
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cache add minikube-local-cache-test:functional-247800
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 cache add minikube-local-cache-test:functional-247800: (2.5516915s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cache delete minikube-local-cache-test:functional-247800
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-247800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (613.8692ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 cache reload: (2.7547101s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.54s)
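
Note: the sequence above is the standard way to restore a cached image deleted from inside the node: remove it via the node's docker, run "cache reload" to re-push everything in the local cache, then confirm with crictl:

	out/minikube-windows-amd64.exe -p functional-247800 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-windows-amd64.exe -p functional-247800 cache reload
	out/minikube-windows-amd64.exe -p functional-247800 ssh sudo crictl inspecti registry.k8s.io/pause:latest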

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs: (1.2319796s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4045198476\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4045198476\001\logs.txt: (1.3053256s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 config get cpus: exit status 14 (166.3163ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 config get cpus: exit status 14 (152.5569ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.14s)
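
Note: "config get" exits with status 14 and "specified key could not be found in config" whenever the key is unset, so the get/unset pairs above are expected to fail both before the value is set and after it is removed:

	out/minikube-windows-amd64.exe -p functional-247800 config set cpus 2
	out/minikube-windows-amd64.exe -p functional-247800 config get cpus     # prints 2
	out/minikube-windows-amd64.exe -p functional-247800 config unset cpus
	out/minikube-windows-amd64.exe -p functional-247800 config get cpus     # exit status 14 again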

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (630.4946ms)

-- stdout --
	* [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1205 06:57:47.291792    7988 out.go:360] Setting OutFile to fd 736 ...
	I1205 06:57:47.339250    7988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.339250    7988 out.go:374] Setting ErrFile to fd 1176...
	I1205 06:57:47.339250    7988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:47.353902    7988 out.go:368] Setting JSON to false
	I1205 06:57:47.359017    7988 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9125,"bootTime":1764908742,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:57:47.359017    7988 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:57:47.363018    7988 out.go:179] * [functional-247800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:57:47.365011    7988 notify.go:221] Checking for updates...
	I1205 06:57:47.367439    7988 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:57:47.369076    7988 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:47.371270    7988 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:57:47.373762    7988 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:47.376978    7988 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:47.379559    7988 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:57:47.379899    7988 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:47.491104    7988 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:57:47.495010    7988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:47.744923    7988 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:47.722713844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:47.748920    7988 out.go:179] * Using the docker driver based on existing profile
	I1205 06:57:47.751920    7988 start.go:309] selected driver: docker
	I1205 06:57:47.751920    7988 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:47.751920    7988 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:47.797935    7988 out.go:203] 
	W1205 06:57:47.799924    7988 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:57:47.804917    7988 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-247800 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.51s)
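
Note: "--dry-run" runs the full validation path (driver selection, profile config load, resource checks) without creating or mutating anything, which is why the undersized request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY / exit status 23 while the second invocation passes. A sketch:

	out/minikube-windows-amd64.exe start -p functional-247800 --dry-run --memory 250MB --driver=docker   # exit status 23
	out/minikube-windows-amd64.exe start -p functional-247800 --dry-run --driver=docker                  # validates cleanly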

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-247800 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (681.6105ms)

-- stdout --
	* [functional-247800] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1205 06:57:46.611860   12996 out.go:360] Setting OutFile to fd 1176 ...
	I1205 06:57:46.656619   12996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:46.656619   12996 out.go:374] Setting ErrFile to fd 836...
	I1205 06:57:46.656619   12996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:57:46.672769   12996 out.go:368] Setting JSON to false
	I1205 06:57:46.675921   12996 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9124,"bootTime":1764908742,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1205 06:57:46.675921   12996 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1205 06:57:46.690021   12996 out.go:179] * [functional-247800] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1205 06:57:46.693718   12996 notify.go:221] Checking for updates...
	I1205 06:57:46.696050   12996 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1205 06:57:46.698176   12996 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:57:46.700887   12996 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1205 06:57:46.703762   12996 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:57:46.705764   12996 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:57:46.708762   12996 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1205 06:57:46.709754   12996 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:57:46.830371   12996 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1205 06:57:46.833363   12996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:57:47.079942   12996 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-05 06:57:47.061532275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1205 06:57:47.085048   12996 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 06:57:47.087007   12996 start.go:309] selected driver: docker
	I1205 06:57:47.087078   12996 start.go:927] validating driver "docker" against &{Name:functional-247800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-247800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:57:47.087412   12996 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:57:47.174782   12996 out.go:203] 
	W1205 06:57:47.176780   12996 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:57:47.179785   12996 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh -n functional-247800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cp functional-247800:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3149493486\001\cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh -n functional-247800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh -n functional-247800 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8036/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo cat /etc/test/nested/copy/8036/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8036.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo cat /etc/ssl/certs/8036.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8036.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo cat /usr/share/ca-certificates/8036.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/80362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo cat /etc/ssl/certs/80362.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/80362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo cat /usr/share/ca-certificates/80362.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.39s)
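The hash-named files checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for certificate directories, so the subtest is asserting that both the raw .pem files and their hash links landed in the node. Assuming an OpenSSL binary inside the node, the link name for a synced cert can be reproduced with:

    openssl x509 -noout -subject_hash -in /etc/ssl/certs/8036.pem

which should print 51391683 for the first test cert, if the naming holds.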

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo systemctl is-active crio": exit status 1 (557.386ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)
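The pass condition here leans on systemctl's exit-code contract for is-active: 0 when the unit is active, 3 (surfaced above as "ssh: Process exited with status 3") when it is inactive. Since this profile runs the docker container runtime, cri-o is expected to be inactive; the complementary check (assuming the same profile):

    out/minikube-windows-amd64.exe -p functional-247800 ssh "sudo systemctl is-active docker"

should print "active" and exit 0.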

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (2.3595286s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.92s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "642.3326ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "161.5928ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "671.0638ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "162.0148ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 version -o=json --components: (1.7226132s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (1.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-247800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-247800
docker.io/kicbase/echo-server:functional-247800
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-247800 image ls --format short --alsologtostderr:
I1205 06:59:38.610466    7896 out.go:360] Setting OutFile to fd 1188 ...
I1205 06:59:38.654452    7896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:38.654452    7896 out.go:374] Setting ErrFile to fd 1304...
I1205 06:59:38.654452    7896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:38.670119    7896 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:38.670506    7896 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:38.677177    7896 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
I1205 06:59:38.735131    7896 ssh_runner.go:195] Run: systemctl --version
I1205 06:59:38.739184    7896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
I1205 06:59:38.791400    7896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
I1205 06:59:38.923683    7896 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.46s)
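The stderr trace spells out how image ls works with the docker driver: inspect the profile container's state, resolve the host port mapped to 22/tcp (55394 here), open an SSH session, and run docker images --no-trunc --format "{{json .}}" inside the node. A rough manual equivalent (assuming the same profile):

    out/minikube-windows-amd64.exe -p functional-247800 ssh "docker images --no-trunc"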

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-247800 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ docker.io/kicbase/echo-server               │ functional-247800 │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ localhost/my-image                          │ functional-247800 │ 03ae4a78a011d │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-247800 │ d2df05676814b │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-247800 image ls --format table --alsologtostderr:
I1205 06:59:45.335075    8940 out.go:360] Setting OutFile to fd 2028 ...
I1205 06:59:45.377596    8940 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:45.377596    8940 out.go:374] Setting ErrFile to fd 1072...
I1205 06:59:45.377596    8940 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:45.389640    8940 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:45.390745    8940 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:45.398051    8940 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
I1205 06:59:45.459514    8940 ssh_runner.go:195] Run: systemctl --version
I1205 06:59:45.462984    8940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
I1205 06:59:45.524524    8940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
I1205 06:59:45.663069    8940 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-247800 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"03ae4a78a011d00d3dc0feb5ef7a498a4c1bbb5186a61d043d2a3458761014f6","repoDigests":[],"repoTags":["localhost/my-image:functional-247800"],"size":"1240000"},{"id":"d2df05676814b0460eb5bc00e9587cffa1d06f480e1a9cac0b09c806935c238a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-247800"],"size":"30"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-247800"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-247800 image ls --format json --alsologtostderr:
I1205 06:59:44.872769    7636 out.go:360] Setting OutFile to fd 1884 ...
I1205 06:59:44.914255    7636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:44.914255    7636 out.go:374] Setting ErrFile to fd 1904...
I1205 06:59:44.914255    7636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:44.926249    7636 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:44.926617    7636 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:44.933258    7636 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
I1205 06:59:44.991404    7636 ssh_runner.go:195] Run: systemctl --version
I1205 06:59:44.994688    7636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
I1205 06:59:45.048514    7636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
I1205 06:59:45.183441    7636 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-247800 image ls --format yaml --alsologtostderr:
- id: d2df05676814b0460eb5bc00e9587cffa1d06f480e1a9cac0b09c806935c238a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-247800
size: "30"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-247800
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-247800 image ls --format yaml --alsologtostderr:
I1205 06:59:39.069952    1292 out.go:360] Setting OutFile to fd 1780 ...
I1205 06:59:39.112123    1292 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:39.112123    1292 out.go:374] Setting ErrFile to fd 2024...
I1205 06:59:39.112123    1292 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:39.127037    1292 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:39.127653    1292 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:39.134050    1292 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
I1205 06:59:39.196621    1292 ssh_runner.go:195] Run: systemctl --version
I1205 06:59:39.200927    1292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
I1205 06:59:39.258010    1292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
I1205 06:59:39.403013    1292 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-247800 ssh pgrep buildkitd: exit status 1 (563.1382ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image build -t localhost/my-image:functional-247800 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 image build -t localhost/my-image:functional-247800 testdata\build --alsologtostderr: (4.3061894s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-247800 image build -t localhost/my-image:functional-247800 testdata\build --alsologtostderr:
I1205 06:59:40.109796    2680 out.go:360] Setting OutFile to fd 1440 ...
I1205 06:59:40.167996    2680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:40.167996    2680 out.go:374] Setting ErrFile to fd 1092...
I1205 06:59:40.167996    2680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:59:40.180281    2680 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:40.185966    2680 config.go:182] Loaded profile config "functional-247800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1205 06:59:40.192253    2680 cli_runner.go:164] Run: docker container inspect functional-247800 --format={{.State.Status}}
I1205 06:59:40.254862    2680 ssh_runner.go:195] Run: systemctl --version
I1205 06:59:40.257463    2680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247800
I1205 06:59:40.311469    2680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55394 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-247800\id_rsa Username:docker}
I1205 06:59:40.449082    2680 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.768398577.tar
I1205 06:59:40.453604    2680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:59:40.473098    2680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.768398577.tar
I1205 06:59:40.480599    2680 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.768398577.tar: stat -c "%s %y" /var/lib/minikube/build/build.768398577.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.768398577.tar': No such file or directory
I1205 06:59:40.480599    2680 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.768398577.tar --> /var/lib/minikube/build/build.768398577.tar (3072 bytes)
I1205 06:59:40.513320    2680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.768398577
I1205 06:59:40.530696    2680 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.768398577 -xf /var/lib/minikube/build/build.768398577.tar
I1205 06:59:40.550801    2680 docker.go:361] Building image: /var/lib/minikube/build/build.768398577
I1205 06:59:40.554903    2680 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-247800 /var/lib/minikube/build/build.768398577
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:03ae4a78a011d00d3dc0feb5ef7a498a4c1bbb5186a61d043d2a3458761014f6 done
#8 naming to localhost/my-image:functional-247800 0.0s done
#8 DONE 0.2s
I1205 06:59:44.271491    2680 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-247800 /var/lib/minikube/build/build.768398577: (3.7164609s)
I1205 06:59:44.275616    2680 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.768398577
I1205 06:59:44.294389    2680 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.768398577.tar
I1205 06:59:44.309323    2680 build_images.go:218] Built localhost/my-image:functional-247800 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.768398577.tar
I1205 06:59:44.309323    2680 build_images.go:134] succeeded building to: functional-247800
I1205 06:59:44.309323    2680 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.33s)
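The BuildKit steps pin down the shape of the testdata\build context: a 97-byte, three-instruction Dockerfile against gcr.io/k8s-minikube/busybox plus a small content.txt. A reconstruction consistent with the log (a sketch, not the checked-in file):

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

The written image sha256:03ae4a78a011d... is the localhost/my-image:functional-247800 entry (1.24MB) visible in the ImageListTable output above.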

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-247800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr: (2.6801949s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr: (2.4055191s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.87s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-247800
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-247800 image load --daemon kicbase/echo-server:functional-247800 --alsologtostderr: (2.4067797s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image save kicbase/echo-server:functional-247800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image rm kicbase/echo-server:functional-247800 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-247800
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 image save --daemon kicbase/echo-server:functional-247800 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-247800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.91s)
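Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full round-trip for a profile-tagged image; condensed (tar path shortened from the log, same commands otherwise):

    out/minikube-windows-amd64.exe -p functional-247800 image save kicbase/echo-server:functional-247800 echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-247800 image rm kicbase/echo-server:functional-247800
    out/minikube-windows-amd64.exe -p functional-247800 image load echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-247800 image save --daemon kicbase/echo-server:functional-247800

with the final docker image inspect confirming the image made it back into the host's Docker daemon.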

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-247800 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.32s)
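All three update-context variants complete in about 0.3s because the command only reconciles the profile's kubeconfig entry (apiserver IP and port) with the running cluster. A quick way to eyeball the rewritten endpoint (assuming kubectl points at this profile's context):

    kubectl config view --minify -o jsonpath="{.clusters[0].cluster.server}"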

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-247800 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-247800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-247800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-247800
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)

TestMultiControlPlane/serial/StartCluster (224.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1205 07:02:23.922438    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:42.856225    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:42.863426    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:42.875508    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:42.897643    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:42.939592    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:43.021715    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:43.182813    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:43.504428    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:44.146598    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:45.428476    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:47.990283    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:53.113332    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:03.354973    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:23.837539    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:25.705023    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:04:04.800965    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:05:22.631992    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:05:26.724786    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m43.0666326s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: (1.6153689s)
--- PASS: TestMultiControlPlane/serial/StartCluster (224.68s)

TestMultiControlPlane/serial/DeployApp (9.68s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 kubectl -- rollout status deployment/busybox: (4.521891s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-q9xzw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-qswb8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-whkt7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-q9xzw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-qswb8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-whkt7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-q9xzw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-qswb8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-whkt7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.68s)
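
The DNS checks above can be replayed manually; pod names are generated by the Deployment, so substitute one from the get pods call (busybox-7b57f96db7-q9xzw et al. in this run):

    out/minikube-windows-amd64.exe -p ha-927600 kubectl -- rollout status deployment/busybox
    out/minikube-windows-amd64.exe -p ha-927600 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local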

TestMultiControlPlane/serial/PingHostFromPods (2.49s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-q9xzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-q9xzw -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-qswb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-qswb8 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-whkt7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec busybox-7b57f96db7-whkt7 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.49s)
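
The two-step check above extracts the host IP from busybox nslookup output (awk 'NR==5' selects the line carrying the resolved address, cut takes its third field) and then pings it once from inside the pod; 192.168.65.254 is the host gateway Docker Desktop exposes here:

    out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-windows-amd64.exe -p ha-927600 kubectl -- exec <pod-name> -- sh -c "ping -c 1 192.168.65.254"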

TestMultiControlPlane/serial/AddWorkerNode (55.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 node add --alsologtostderr -v 5: (53.3273468s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: (1.9277973s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.26s)
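
node add with no extra flags joins the new machine as a worker (compare AddSecondaryNode further down, which passes --control-plane), which is why m04 later reports type Worker:

    out/minikube-windows-amd64.exe -p ha-927600 node add
    out/minikube-windows-amd64.exe -p ha-927600 status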

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-927600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (2.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0169581s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.02s)

TestMultiControlPlane/serial/CopyFile (34.11s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 status --output json --alsologtostderr -v 5: (1.9420407s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp testdata\cp-test.txt ha-927600:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile71426448\001\cp-test_ha-927600.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600:/home/docker/cp-test.txt ha-927600-m02:/home/docker/cp-test_ha-927600_ha-927600-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test_ha-927600_ha-927600-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600:/home/docker/cp-test.txt ha-927600-m03:/home/docker/cp-test_ha-927600_ha-927600-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test_ha-927600_ha-927600-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600:/home/docker/cp-test.txt ha-927600-m04:/home/docker/cp-test_ha-927600_ha-927600-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test_ha-927600_ha-927600-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp testdata\cp-test.txt ha-927600-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile71426448\001\cp-test_ha-927600-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m02:/home/docker/cp-test.txt ha-927600:/home/docker/cp-test_ha-927600-m02_ha-927600.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test_ha-927600-m02_ha-927600.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m02:/home/docker/cp-test.txt ha-927600-m03:/home/docker/cp-test_ha-927600-m02_ha-927600-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test_ha-927600-m02_ha-927600-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m02:/home/docker/cp-test.txt ha-927600-m04:/home/docker/cp-test_ha-927600-m02_ha-927600-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test_ha-927600-m02_ha-927600-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp testdata\cp-test.txt ha-927600-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile71426448\001\cp-test_ha-927600-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m03:/home/docker/cp-test.txt ha-927600:/home/docker/cp-test_ha-927600-m03_ha-927600.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test_ha-927600-m03_ha-927600.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m03:/home/docker/cp-test.txt ha-927600-m02:/home/docker/cp-test_ha-927600-m03_ha-927600-m02.txt
E1205 07:07:07.003068    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test_ha-927600-m03_ha-927600-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m03:/home/docker/cp-test.txt ha-927600-m04:/home/docker/cp-test_ha-927600-m03_ha-927600-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test_ha-927600-m03_ha-927600-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp testdata\cp-test.txt ha-927600-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile71426448\001\cp-test_ha-927600-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m04:/home/docker/cp-test.txt ha-927600:/home/docker/cp-test_ha-927600-m04_ha-927600.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600 "sudo cat /home/docker/cp-test_ha-927600-m04_ha-927600.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m04:/home/docker/cp-test.txt ha-927600-m02:/home/docker/cp-test_ha-927600-m04_ha-927600-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m02 "sudo cat /home/docker/cp-test_ha-927600-m04_ha-927600-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m04:/home/docker/cp-test.txt ha-927600-m03:/home/docker/cp-test_ha-927600-m04_ha-927600-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 ssh -n ha-927600-m03 "sudo cat /home/docker/cp-test_ha-927600-m04_ha-927600-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (34.11s)
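
The matrix above covers every direction minikube cp supports, with each copy verified via ssh -n <node> "sudo cat <path>". The three general forms (destination paths illustrative) are:

    out/minikube-windows-amd64.exe -p ha-927600 cp testdata\cp-test.txt ha-927600-m02:/home/docker/cp-test.txt                    # host -> node
    out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m02:/home/docker/cp-test.txt <local-dir>\cp-test_ha-927600-m02.txt   # node -> host
    out/minikube-windows-amd64.exe -p ha-927600 cp ha-927600-m02:/home/docker/cp-test.txt ha-927600-m03:/home/docker/cp-test.txt  # node -> node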

TestMultiControlPlane/serial/StopSecondaryNode (13.43s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 node stop m02 --alsologtostderr -v 5
E1205 07:07:23.927796    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 node stop m02 --alsologtostderr -v 5: (11.844542s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: exit status 7 (1.5792454s)
-- stdout --
	ha-927600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-927600-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927600-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-927600-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1205 07:07:30.314142    6548 out.go:360] Setting OutFile to fd 1396 ...
	I1205 07:07:30.361307    6548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:30.361307    6548 out.go:374] Setting ErrFile to fd 2044...
	I1205 07:07:30.361307    6548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:07:30.373610    6548 out.go:368] Setting JSON to false
	I1205 07:07:30.373610    6548 mustload.go:66] Loading cluster: ha-927600
	I1205 07:07:30.373610    6548 notify.go:221] Checking for updates...
	I1205 07:07:30.374166    6548 config.go:182] Loaded profile config "ha-927600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:07:30.374166    6548 status.go:174] checking status of ha-927600 ...
	I1205 07:07:30.382203    6548 cli_runner.go:164] Run: docker container inspect ha-927600 --format={{.State.Status}}
	I1205 07:07:30.440581    6548 status.go:371] ha-927600 host status = "Running" (err=<nil>)
	I1205 07:07:30.440581    6548 host.go:66] Checking if "ha-927600" exists ...
	I1205 07:07:30.444581    6548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927600
	I1205 07:07:30.500233    6548 host.go:66] Checking if "ha-927600" exists ...
	I1205 07:07:30.504232    6548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:07:30.508241    6548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927600
	I1205 07:07:30.558230    6548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57297 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-927600\id_rsa Username:docker}
	I1205 07:07:30.684625    6548 ssh_runner.go:195] Run: systemctl --version
	I1205 07:07:30.700864    6548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:30.725077    6548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-927600
	I1205 07:07:30.781284    6548 kubeconfig.go:125] found "ha-927600" server: "https://127.0.0.1:57301"
	I1205 07:07:30.781284    6548 api_server.go:166] Checking apiserver status ...
	I1205 07:07:30.786088    6548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:07:30.811179    6548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2301/cgroup
	I1205 07:07:30.825046    6548 api_server.go:182] apiserver freezer: "7:freezer:/docker/81e0f561081d6979af26ee12d10a14aa0d570a09251598a7c306275c5cb86ab2/kubepods/burstable/pod95ca73036f9dd4eae6916b238724c3c5/0e6e15d89530e93a2fbd8ef624961f550dd8b7ec6264718f0e85b09f6f3b0948"
	I1205 07:07:30.829768    6548 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/81e0f561081d6979af26ee12d10a14aa0d570a09251598a7c306275c5cb86ab2/kubepods/burstable/pod95ca73036f9dd4eae6916b238724c3c5/0e6e15d89530e93a2fbd8ef624961f550dd8b7ec6264718f0e85b09f6f3b0948/freezer.state
	I1205 07:07:30.843186    6548 api_server.go:204] freezer state: "THAWED"
	I1205 07:07:30.843186    6548 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57301/healthz ...
	I1205 07:07:30.854526    6548 api_server.go:279] https://127.0.0.1:57301/healthz returned 200:
	ok
	I1205 07:07:30.854526    6548 status.go:463] ha-927600 apiserver status = Running (err=<nil>)
	I1205 07:07:30.854779    6548 status.go:176] ha-927600 status: &{Name:ha-927600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:07:30.854779    6548 status.go:174] checking status of ha-927600-m02 ...
	I1205 07:07:30.864123    6548 cli_runner.go:164] Run: docker container inspect ha-927600-m02 --format={{.State.Status}}
	I1205 07:07:30.922237    6548 status.go:371] ha-927600-m02 host status = "Stopped" (err=<nil>)
	I1205 07:07:30.922237    6548 status.go:384] host is not running, skipping remaining checks
	I1205 07:07:30.922237    6548 status.go:176] ha-927600-m02 status: &{Name:ha-927600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:07:30.922237    6548 status.go:174] checking status of ha-927600-m03 ...
	I1205 07:07:30.930513    6548 cli_runner.go:164] Run: docker container inspect ha-927600-m03 --format={{.State.Status}}
	I1205 07:07:30.984926    6548 status.go:371] ha-927600-m03 host status = "Running" (err=<nil>)
	I1205 07:07:30.984926    6548 host.go:66] Checking if "ha-927600-m03" exists ...
	I1205 07:07:30.988877    6548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927600-m03
	I1205 07:07:31.044949    6548 host.go:66] Checking if "ha-927600-m03" exists ...
	I1205 07:07:31.051298    6548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:07:31.054131    6548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927600-m03
	I1205 07:07:31.109717    6548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57418 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-927600-m03\id_rsa Username:docker}
	I1205 07:07:31.295012    6548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:31.319949    6548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-927600
	I1205 07:07:31.378774    6548 kubeconfig.go:125] found "ha-927600" server: "https://127.0.0.1:57301"
	I1205 07:07:31.378774    6548 api_server.go:166] Checking apiserver status ...
	I1205 07:07:31.384341    6548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:07:31.407868    6548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2172/cgroup
	I1205 07:07:31.421948    6548 api_server.go:182] apiserver freezer: "7:freezer:/docker/a54ea2ba35bb81b80b9373f1c2cd11f976bcc21854b5fdeab67ebf5ca02ad150/kubepods/burstable/pod5df8e68b76fb0921c8435319abb58b97/0f67ae949cf006b47dfba7323d3776436beec38e45dd3fa15f9d050d15786477"
	I1205 07:07:31.426415    6548 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a54ea2ba35bb81b80b9373f1c2cd11f976bcc21854b5fdeab67ebf5ca02ad150/kubepods/burstable/pod5df8e68b76fb0921c8435319abb58b97/0f67ae949cf006b47dfba7323d3776436beec38e45dd3fa15f9d050d15786477/freezer.state
	I1205 07:07:31.440613    6548 api_server.go:204] freezer state: "THAWED"
	I1205 07:07:31.440613    6548 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57301/healthz ...
	I1205 07:07:31.454787    6548 api_server.go:279] https://127.0.0.1:57301/healthz returned 200:
	ok
	I1205 07:07:31.454787    6548 status.go:463] ha-927600-m03 apiserver status = Running (err=<nil>)
	I1205 07:07:31.454787    6548 status.go:176] ha-927600-m03 status: &{Name:ha-927600-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:07:31.454787    6548 status.go:174] checking status of ha-927600-m04 ...
	I1205 07:07:31.462580    6548 cli_runner.go:164] Run: docker container inspect ha-927600-m04 --format={{.State.Status}}
	I1205 07:07:31.515586    6548 status.go:371] ha-927600-m04 host status = "Running" (err=<nil>)
	I1205 07:07:31.515586    6548 host.go:66] Checking if "ha-927600-m04" exists ...
	I1205 07:07:31.520535    6548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927600-m04
	I1205 07:07:31.574743    6548 host.go:66] Checking if "ha-927600-m04" exists ...
	I1205 07:07:31.581012    6548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:07:31.584853    6548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927600-m04
	I1205 07:07:31.640769    6548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57548 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-927600-m04\id_rsa Username:docker}
	I1205 07:07:31.774862    6548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:07:31.793811    6548 status.go:176] ha-927600-m04 status: &{Name:ha-927600-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.43s)
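
The non-zero exit above is expected: minikube status encodes component state in its exit code (see minikube status --help), so it returns 7 while m02 is down even though the command itself worked. Scripts consuming status should branch on the code rather than treating any non-zero exit as failure, e.g. (sh-style, illustrative):

    out/minikube-windows-amd64.exe -p ha-927600 node stop m02
    out/minikube-windows-amd64.exe -p ha-927600 status || echo "cluster degraded, status exit code $?"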

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5872501s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.92s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 node start m02 --alsologtostderr -v 5
E1205 07:07:42.861378    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:08:10.569118    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 node start m02 --alsologtostderr -v 5: (47.5805868s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: (2.2037209s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.92s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0013158s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.00s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (203.46s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 stop --alsologtostderr -v 5: (39.5734652s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 start --wait true --alsologtostderr -v 5
E1205 07:10:22.636617    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 start --wait true --alsologtostderr -v 5: (2m43.5650134s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (203.46s)
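
The invariant checked here is that a full stop/start cycle preserves the node list; by hand that is roughly:

    out/minikube-windows-amd64.exe -p ha-927600 node list     # record the four nodes
    out/minikube-windows-amd64.exe -p ha-927600 stop
    out/minikube-windows-amd64.exe -p ha-927600 start --wait true
    out/minikube-windows-amd64.exe -p ha-927600 node list     # expect the identical list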

TestMultiControlPlane/serial/DeleteSecondaryNode (14.36s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 node delete m03 --alsologtostderr -v 5: (12.4790285s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: (1.484729s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.36s)
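
The go-template in the final step just prints each node's Ready condition, one per line; a simpler manual equivalent is:

    out/minikube-windows-amd64.exe -p ha-927600 node delete m03
    kubectl get nodes      # m03 should be gone and all remaining nodes Ready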

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4855452s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)

TestMultiControlPlane/serial/StopCluster (35.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 stop --alsologtostderr -v 5
E1205 07:12:23.931591    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 stop --alsologtostderr -v 5: (35.5736786s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: exit status 7 (337.3591ms)
-- stdout --
	ha-927600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927600-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927600-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1205 07:12:40.290437    2028 out.go:360] Setting OutFile to fd 1004 ...
	I1205 07:12:40.332822    2028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:12:40.332822    2028 out.go:374] Setting ErrFile to fd 1692...
	I1205 07:12:40.332822    2028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:12:40.344498    2028 out.go:368] Setting JSON to false
	I1205 07:12:40.344658    2028 mustload.go:66] Loading cluster: ha-927600
	I1205 07:12:40.344740    2028 notify.go:221] Checking for updates...
	I1205 07:12:40.345314    2028 config.go:182] Loaded profile config "ha-927600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:12:40.345314    2028 status.go:174] checking status of ha-927600 ...
	I1205 07:12:40.353102    2028 cli_runner.go:164] Run: docker container inspect ha-927600 --format={{.State.Status}}
	I1205 07:12:40.406429    2028 status.go:371] ha-927600 host status = "Stopped" (err=<nil>)
	I1205 07:12:40.406429    2028 status.go:384] host is not running, skipping remaining checks
	I1205 07:12:40.406429    2028 status.go:176] ha-927600 status: &{Name:ha-927600 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:12:40.406429    2028 status.go:174] checking status of ha-927600-m02 ...
	I1205 07:12:40.414263    2028 cli_runner.go:164] Run: docker container inspect ha-927600-m02 --format={{.State.Status}}
	I1205 07:12:40.468976    2028 status.go:371] ha-927600-m02 host status = "Stopped" (err=<nil>)
	I1205 07:12:40.468976    2028 status.go:384] host is not running, skipping remaining checks
	I1205 07:12:40.468976    2028 status.go:176] ha-927600-m02 status: &{Name:ha-927600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:12:40.468976    2028 status.go:174] checking status of ha-927600-m04 ...
	I1205 07:12:40.476404    2028 cli_runner.go:164] Run: docker container inspect ha-927600-m04 --format={{.State.Status}}
	I1205 07:12:40.531697    2028 status.go:371] ha-927600-m04 host status = "Stopped" (err=<nil>)
	I1205 07:12:40.531697    2028 status.go:384] host is not running, skipping remaining checks
	I1205 07:12:40.531697    2028 status.go:176] ha-927600-m04 status: &{Name:ha-927600-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.91s)

TestMultiControlPlane/serial/RestartCluster (122.52s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 start --wait true --alsologtostderr -v 5 --driver=docker
E1205 07:12:42.865759    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 start --wait true --alsologtostderr -v 5 --driver=docker: (2m0.758172s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: (1.4614766s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (122.52s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5084744s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.51s)

TestMultiControlPlane/serial/AddSecondaryNode (80.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 node add --control-plane --alsologtostderr -v 5
E1205 07:15:22.640835    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 node add --control-plane --alsologtostderr -v 5: (1m18.0855487s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-927600 status --alsologtostderr -v 5: (1.9787217s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.07s)
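
Unlike AddWorkerNode earlier, the --control-plane flag joins the new machine as another control plane, which is why this step takes noticeably longer (an additional control-plane stack has to come up before status is clean):

    out/minikube-windows-amd64.exe -p ha-927600 node add --control-plane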

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9960843s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.00s)

TestImageBuild/serial/Setup (50.48s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-732200 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-732200 --driver=docker: (50.4743704s)
--- PASS: TestImageBuild/serial/Setup (50.48s)

TestImageBuild/serial/NormalBuild (4.09s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-732200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-732200: (4.0936421s)
--- PASS: TestImageBuild/serial/NormalBuild (4.09s)

TestImageBuild/serial/BuildWithBuildArg (2.31s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-732200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-732200: (2.3087213s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.31s)
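
--build-opt values are forwarded to the underlying image build, so the invocation above should behave roughly like this plain docker build run against the node's daemon (equivalence assumed, not asserted by the test):

    docker build -t aaa:latest --build-arg ENV_A=test_env_str --no-cache ./testdata/image-build/test-arg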

TestImageBuild/serial/BuildWithDockerIgnore (1.23s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-732200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-732200: (1.2296958s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.23s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.26s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-732200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-732200: (1.2650209s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.26s)

TestJSONOutput/start/Command (77.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-231100 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
E1205 07:17:23.935830    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:17:42.870366    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-231100 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m17.5236594s)
--- PASS: TestJSONOutput/start/Command (77.52s)
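
Each line of the --output=json stream is a self-contained CloudEvents-style object (the TestErrorJSONOutput stdout further down shows the shape), so it can be filtered with ordinary line tools; printing only step messages, for instance (jq usage illustrative):

    out/minikube-windows-amd64.exe start -p json-output-231100 --output=json --user=testUser --memory=3072 --wait=true --driver=docker | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'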

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.16s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-231100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-231100 --output=json --user=testUser: (1.1604903s)
--- PASS: TestJSONOutput/pause/Command (1.16s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.89s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-231100 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.89s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-231100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-231100 --output=json --user=testUser: (12.1103807s)
--- PASS: TestJSONOutput/stop/Command (12.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.68s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-944000 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-944000 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (210.0769ms)
-- stdout --
	{"specversion":"1.0","id":"976293eb-6aa7-4f8e-9409-f9344f1d1476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-944000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee38a7d0-626f-45b9-9192-d5157f8920ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"bbb55c3e-74b6-4b54-9706-5f34cb15198e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a89ffa26-b8df-40cf-bd86-046ac87631cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"554614af-2b7e-4db4-9798-1095987e0ca9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"5f4779a3-9ca3-4454-9240-1f074b912dfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"65c5f82f-5311-405e-bd0f-0ad4307fea70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-944000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-944000
--- PASS: TestErrorJSONOutput (0.68s)

TestKicCustomNetwork/create_custom_network (54.46s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-078100 --network=
E1205 07:19:05.941449    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-078100 --network=: (50.7614074s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-078100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-078100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-078100: (3.6378608s)
--- PASS: TestKicCustomNetwork/create_custom_network (54.46s)
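
An empty --network= value asks minikube to create its own user-defined docker network, named after the profile, and to remove it again on delete; the docker network ls call above is what confirms it exists:

    out/minikube-windows-amd64.exe start -p docker-network-078100 --network=
    docker network ls --format {{.Name}}      # expect docker-network-078100 in the list
    out/minikube-windows-amd64.exe delete -p docker-network-078100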

TestKicCustomNetwork/use_default_bridge_network (53.25s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-164100 --network=bridge
E1205 07:20:05.722981    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:22.645796    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-164100 --network=bridge: (50.197503s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-164100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-164100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-164100: (2.9932162s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (53.25s)

TestKicExistingNetwork (55.09s)

=== RUN   TestKicExistingNetwork
I1205 07:20:46.513439    8036 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1205 07:20:46.571835    8036 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1205 07:20:46.575302    8036 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1205 07:20:46.575302    8036 cli_runner.go:164] Run: docker network inspect existing-network
W1205 07:20:46.626613    8036 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1205 07:20:46.626613    8036 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1205 07:20:46.626613    8036 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1205 07:20:46.629616    8036 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 07:20:46.697612    8036 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006f5380}
I1205 07:20:46.697612    8036 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1205 07:20:46.700620    8036 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1205 07:20:46.759436    8036 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1205 07:20:46.759436    8036 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1205 07:20:46.759436    8036 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1205 07:20:46.792158    8036 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1205 07:20:46.805142    8036 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015b08d0}
I1205 07:20:46.805142    8036 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1205 07:20:46.808143    8036 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1205 07:20:46.956962    8036 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-962000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-962000 --network=existing-network: (51.3273581s)
helpers_test.go:175: Cleaning up "existing-network-962000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-962000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-962000: (3.1899155s)
I1205 07:21:41.543445    8036 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (55.09s)
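Note on the retry sequence above: Docker refuses to create a second network whose subnet overlaps an existing one, and minikube uses that failure ("Pool overlaps with other one on this address space") to step to the next free private /24, which is why the create succeeds on 192.168.58.0/24. A minimal hand-run sketch of the same behavior (PowerShell; "demo-net" is an illustrative name, and "minikube"/"docker" stand in for the binaries driven by the harness):

    # First-choice subnet, taken by a pre-existing network:
    docker network create --driver=bridge --subnet=192.168.49.0/24 existing-network
    docker network create --driver=bridge --subnet=192.168.49.0/24 demo-net   # fails: pool overlaps
    # Retry on the next private /24, as minikube does automatically:
    docker network create --driver=bridge --subnet=192.168.58.0/24 demo-net   # succeeds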

TestKicCustomSubnet (56.69s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-404500 --subnet=192.168.60.0/24
E1205 07:22:23.940734    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-404500 --subnet=192.168.60.0/24: (53.139183s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-404500 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-404500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-404500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-404500: (3.4885841s)
--- PASS: TestKicCustomSubnet (56.69s)
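The assertion above reduces to two commands: start on a fixed subnet, then read the subnet back out of the network's IPAM config. A minimal sketch (profile name as in the test; "minikube" stands in for out/minikube-windows-amd64.exe):

    minikube start -p custom-subnet-404500 --subnet=192.168.60.0/24
    # The template selects the first IPAM block; expect 192.168.60.0/24:
    docker network inspect custom-subnet-404500 --format "{{(index .IPAM.Config 0).Subnet}}"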

TestKicStaticIP (53.59s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-167800 --static-ip=192.168.200.200
E1205 07:22:42.874259    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-167800 --static-ip=192.168.200.200: (49.7354202s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-167800 ip
helpers_test.go:175: Cleaning up "static-ip-167800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-167800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-167800: (3.5471379s)
--- PASS: TestKicStaticIP (53.59s)

TestMainNoArgs (0.16s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (100.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-709400 --driver=docker
E1205 07:23:47.020174    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-709400 --driver=docker: (46.4931872s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-709400 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-709400 --driver=docker: (44.2974668s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-709400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.2138047s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-709400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1812391s)
helpers_test.go:175: Cleaning up "second-709400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-709400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-709400: (3.7183156s)
helpers_test.go:175: Cleaning up "first-709400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-709400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-709400: (3.6207212s)
--- PASS: TestMinikubeProfile (100.98s)
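The profile exercise above is: start two clusters, switch the active profile between them, and list both as JSON after each switch. Condensed sketch (names as in the test; "minikube" stands in for the built binary):

    minikube start -p first-709400 --driver=docker
    minikube start -p second-709400 --driver=docker
    minikube profile first-709400     # make first-709400 the active profile
    minikube profile list -ojson      # machine-readable listing of both profiles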

TestMountStart/serial/StartWithMountFirst (13.63s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-473400 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1364330192\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
E1205 07:25:22.650530    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-473400 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1364330192\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (12.6268482s)
--- PASS: TestMountStart/serial/StartWithMountFirst (13.63s)

TestMountStart/serial/VerifyMountFirst (0.56s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-473400 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.56s)
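The --mount-string value used above splits at the colon after the drive letter: the Windows directory on the left is exposed inside the node at the guest path on the right, so the verify step is just an ls over ssh. Hand-run sketch (the host directory is illustrative):

    minikube start -p mount-start-1-473400 --no-kubernetes --driver=docker --mount-string C:\some\host\dir:/minikube-host --mount-port 46464
    minikube -p mount-start-1-473400 ssh -- ls /minikube-host   # should list the contents of C:\some\host\dir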

TestMountStart/serial/StartWithMountSecond (13.87s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-473400 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1364330192\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-473400 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1364330192\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.8733667s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.87s)

TestMountStart/serial/VerifyMountSecond (0.54s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-473400 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.54s)

TestMountStart/serial/DeleteFirst (2.4s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-473400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-473400 --alsologtostderr -v=5: (2.3964372s)
--- PASS: TestMountStart/serial/DeleteFirst (2.40s)

TestMountStart/serial/VerifyMountPostDelete (0.56s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-473400 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.56s)

TestMountStart/serial/Stop (1.87s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-473400
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-473400: (1.8742768s)
--- PASS: TestMountStart/serial/Stop (1.87s)

TestMountStart/serial/RestartStopped (10.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-473400
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-473400: (9.8591566s)
--- PASS: TestMountStart/serial/RestartStopped (10.86s)

TestMountStart/serial/VerifyMountPostStop (0.56s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-473400 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.56s)

TestMultiNode/serial/FreshStart2Nodes (130.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-994000 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1205 07:27:23.944737    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:27:42.879573    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-994000 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m9.5027393s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.49s)

TestMultiNode/serial/DeployApp2Nodes (7.57s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- rollout status deployment/busybox: (3.9011721s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-nm62h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-tgcsb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-nm62h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-tgcsb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-nm62h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-tgcsb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.57s)

TestMultiNode/serial/PingHostFrom2Pods (1.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-nm62h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-nm62h -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-tgcsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-994000 -- exec busybox-7b57f96db7-tgcsb -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.76s)
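What the pipeline above does: minikube publishes the host's address to pods as host.minikube.internal; busybox's nslookup prints the resolved address on its fifth output line, awk 'NR==5' keeps that line, cut -d' ' -f3 keeps the IP, and the test pings it. Equivalent direct invocation against one pod (pod name as created above):

    kubectl --context multinode-994000 exec busybox-7b57f96db7-nm62h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # -> 192.168.65.254 on this run, which the test then verifies with:
    kubectl --context multinode-994000 exec busybox-7b57f96db7-nm62h -- sh -c "ping -c 1 192.168.65.254"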

TestMultiNode/serial/AddNode (54.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-994000 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-994000 -v=5 --alsologtostderr: (52.8467471s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr: (1.3337072s)
--- PASS: TestMultiNode/serial/AddNode (54.18s)

TestMultiNode/serial/MultiNodeLabels (0.14s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-994000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.14s)

TestMultiNode/serial/ProfileList (1.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.390438s)
--- PASS: TestMultiNode/serial/ProfileList (1.39s)

TestMultiNode/serial/CopyFile (19.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-994000 status --output json --alsologtostderr: (1.3232367s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp testdata\cp-test.txt multinode-994000:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2195612109\001\cp-test_multinode-994000.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000:/home/docker/cp-test.txt multinode-994000-m02:/home/docker/cp-test_multinode-994000_multinode-994000-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m02 "sudo cat /home/docker/cp-test_multinode-994000_multinode-994000-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000:/home/docker/cp-test.txt multinode-994000-m03:/home/docker/cp-test_multinode-994000_multinode-994000-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m03 "sudo cat /home/docker/cp-test_multinode-994000_multinode-994000-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp testdata\cp-test.txt multinode-994000-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2195612109\001\cp-test_multinode-994000-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000-m02:/home/docker/cp-test.txt multinode-994000:/home/docker/cp-test_multinode-994000-m02_multinode-994000.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000 "sudo cat /home/docker/cp-test_multinode-994000-m02_multinode-994000.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000-m02:/home/docker/cp-test.txt multinode-994000-m03:/home/docker/cp-test_multinode-994000-m02_multinode-994000-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m03 "sudo cat /home/docker/cp-test_multinode-994000-m02_multinode-994000-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp testdata\cp-test.txt multinode-994000-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2195612109\001\cp-test_multinode-994000-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000-m03:/home/docker/cp-test.txt multinode-994000:/home/docker/cp-test_multinode-994000-m03_multinode-994000.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000 "sudo cat /home/docker/cp-test_multinode-994000-m03_multinode-994000.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 cp multinode-994000-m03:/home/docker/cp-test.txt multinode-994000-m02:/home/docker/cp-test_multinode-994000-m03_multinode-994000-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 ssh -n multinode-994000-m02 "sudo cat /home/docker/cp-test_multinode-994000-m03_multinode-994000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (19.42s)
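The copy matrix above covers the three directions minikube cp supports, each verified by cat-ing the file over ssh. Condensed sketch (destination paths illustrative; "minikube" stands in for the built binary):

    minikube -p multinode-994000 cp testdata\cp-test.txt multinode-994000:/home/docker/cp-test.txt        # host -> node
    minikube -p multinode-994000 cp multinode-994000:/home/docker/cp-test.txt C:\tmp\cp-test-copy.txt     # node -> host
    minikube -p multinode-994000 cp multinode-994000:/home/docker/cp-test.txt multinode-994000-m02:/home/docker/cp-test.txt   # node -> node
    minikube -p multinode-994000 ssh -n multinode-994000-m02 "sudo cat /home/docker/cp-test.txt"          # verify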

TestMultiNode/serial/StopNode (3.83s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-994000 node stop m03: (1.7268451s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-994000 status: exit status 7 (1.0286742s)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr: exit status 7 (1.0702484s)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 07:29:38.715278   14272 out.go:360] Setting OutFile to fd 1160 ...
	I1205 07:29:38.760676   14272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:29:38.760676   14272 out.go:374] Setting ErrFile to fd 1156...
	I1205 07:29:38.760676   14272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:29:38.771430   14272 out.go:368] Setting JSON to false
	I1205 07:29:38.771430   14272 mustload.go:66] Loading cluster: multinode-994000
	I1205 07:29:38.771430   14272 notify.go:221] Checking for updates...
	I1205 07:29:38.772853   14272 config.go:182] Loaded profile config "multinode-994000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:29:38.772881   14272 status.go:174] checking status of multinode-994000 ...
	I1205 07:29:38.780876   14272 cli_runner.go:164] Run: docker container inspect multinode-994000 --format={{.State.Status}}
	I1205 07:29:38.837940   14272 status.go:371] multinode-994000 host status = "Running" (err=<nil>)
	I1205 07:29:38.837940   14272 host.go:66] Checking if "multinode-994000" exists ...
	I1205 07:29:38.842770   14272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994000
	I1205 07:29:38.898313   14272 host.go:66] Checking if "multinode-994000" exists ...
	I1205 07:29:38.903897   14272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:29:38.907777   14272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994000
	I1205 07:29:38.975269   14272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58724 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-994000\id_rsa Username:docker}
	I1205 07:29:39.112123   14272 ssh_runner.go:195] Run: systemctl --version
	I1205 07:29:39.129576   14272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:29:39.153845   14272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-994000
	I1205 07:29:39.209323   14272 kubeconfig.go:125] found "multinode-994000" server: "https://127.0.0.1:58728"
	I1205 07:29:39.209362   14272 api_server.go:166] Checking apiserver status ...
	I1205 07:29:39.214660   14272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:29:39.241870   14272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2287/cgroup
	I1205 07:29:39.255882   14272 api_server.go:182] apiserver freezer: "7:freezer:/docker/8c28dc6d3244eae7ceddbbe8b10f7cc9e0c7de84abfe638a238e8fb798096049/kubepods/burstable/podd7b19d9d8dac74b0fb90fdd301cdc696/199835576cd496c1d835f966e830a1e016c80132adc63235384297b46d5f965e"
	I1205 07:29:39.259866   14272 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8c28dc6d3244eae7ceddbbe8b10f7cc9e0c7de84abfe638a238e8fb798096049/kubepods/burstable/podd7b19d9d8dac74b0fb90fdd301cdc696/199835576cd496c1d835f966e830a1e016c80132adc63235384297b46d5f965e/freezer.state
	I1205 07:29:39.273807   14272 api_server.go:204] freezer state: "THAWED"
	I1205 07:29:39.273807   14272 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58728/healthz ...
	I1205 07:29:39.286689   14272 api_server.go:279] https://127.0.0.1:58728/healthz returned 200:
	ok
	I1205 07:29:39.286689   14272 status.go:463] multinode-994000 apiserver status = Running (err=<nil>)
	I1205 07:29:39.286689   14272 status.go:176] multinode-994000 status: &{Name:multinode-994000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:29:39.286689   14272 status.go:174] checking status of multinode-994000-m02 ...
	I1205 07:29:39.296188   14272 cli_runner.go:164] Run: docker container inspect multinode-994000-m02 --format={{.State.Status}}
	I1205 07:29:39.350435   14272 status.go:371] multinode-994000-m02 host status = "Running" (err=<nil>)
	I1205 07:29:39.350435   14272 host.go:66] Checking if "multinode-994000-m02" exists ...
	I1205 07:29:39.354666   14272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994000-m02
	I1205 07:29:39.408112   14272 host.go:66] Checking if "multinode-994000-m02" exists ...
	I1205 07:29:39.414057   14272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:29:39.417417   14272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994000-m02
	I1205 07:29:39.471597   14272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58778 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-994000-m02\id_rsa Username:docker}
	I1205 07:29:39.610253   14272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:29:39.628170   14272 status.go:176] multinode-994000-m02 status: &{Name:multinode-994000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:29:39.628170   14272 status.go:174] checking status of multinode-994000-m03 ...
	I1205 07:29:39.635048   14272 cli_runner.go:164] Run: docker container inspect multinode-994000-m03 --format={{.State.Status}}
	I1205 07:29:39.690170   14272 status.go:371] multinode-994000-m03 host status = "Stopped" (err=<nil>)
	I1205 07:29:39.690170   14272 status.go:384] host is not running, skipping remaining checks
	I1205 07:29:39.690170   14272 status.go:176] multinode-994000-m03 status: &{Name:multinode-994000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.83s)
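The --alsologtostderr trace above shows how status arrives at "apiserver: Running": inspect the container's state, pgrep the kube-apiserver process over ssh, confirm its freezer cgroup is THAWED, then GET /healthz on the host-mapped port. A rough hand-run equivalent (58728 is the port Docker happened to map to 8443/tcp on this run; /healthz is assumed readable anonymously, as under the default system:public-info-viewer binding):

    docker container inspect multinode-994000 --format={{.State.Status}}
    minikube -p multinode-994000 ssh -- "sudo pgrep -xnf kube-apiserver.*minikube.*"
    # find the host port mapped to 8443/tcp (58728 here), then:
    curl.exe -k https://127.0.0.1:58728/healthz   # expect: ok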

TestMultiNode/serial/StartAfterStop (13.25s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-994000 node start m03 -v=5 --alsologtostderr: (11.7872404s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-994000 status -v=5 --alsologtostderr: (1.3407609s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.25s)

TestMultiNode/serial/RestartKeepsNodes (83.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-994000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-994000
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-994000: (24.7594619s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-994000 --wait=true -v=5 --alsologtostderr
E1205 07:30:22.655195    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-994000 --wait=true -v=5 --alsologtostderr: (58.5035286s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-994000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.57s)

TestMultiNode/serial/DeleteNode (8.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-994000 node delete m03: (6.9068998s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.16s)

TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-994000 stop: (23.5596153s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-994000 status: exit status 7 (266.9203ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr: exit status 7 (274.99ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 07:31:48.601461   12600 out.go:360] Setting OutFile to fd 1628 ...
	I1205 07:31:48.644583   12600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:31:48.644664   12600 out.go:374] Setting ErrFile to fd 1512...
	I1205 07:31:48.644735   12600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:31:48.654830   12600 out.go:368] Setting JSON to false
	I1205 07:31:48.654830   12600 mustload.go:66] Loading cluster: multinode-994000
	I1205 07:31:48.654830   12600 notify.go:221] Checking for updates...
	I1205 07:31:48.655936   12600 config.go:182] Loaded profile config "multinode-994000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1205 07:31:48.655936   12600 status.go:174] checking status of multinode-994000 ...
	I1205 07:31:48.663831   12600 cli_runner.go:164] Run: docker container inspect multinode-994000 --format={{.State.Status}}
	I1205 07:31:48.717306   12600 status.go:371] multinode-994000 host status = "Stopped" (err=<nil>)
	I1205 07:31:48.717306   12600 status.go:384] host is not running, skipping remaining checks
	I1205 07:31:48.717306   12600 status.go:176] multinode-994000 status: &{Name:multinode-994000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:31:48.718306   12600 status.go:174] checking status of multinode-994000-m02 ...
	I1205 07:31:48.725462   12600 cli_runner.go:164] Run: docker container inspect multinode-994000-m02 --format={{.State.Status}}
	I1205 07:31:48.779043   12600 status.go:371] multinode-994000-m02 host status = "Stopped" (err=<nil>)
	I1205 07:31:48.779043   12600 status.go:384] host is not running, skipping remaining checks
	I1205 07:31:48.779043   12600 status.go:176] multinode-994000-m02 status: &{Name:multinode-994000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (62.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-994000 --wait=true -v=5 --alsologtostderr --driver=docker
E1205 07:32:23.950163    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:32:42.884326    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-994000 --wait=true -v=5 --alsologtostderr --driver=docker: (1m1.4978512s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-994000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (62.86s)

TestMultiNode/serial/ValidateNameConflict (50.82s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-994000
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-994000-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-994000-m02 --driver=docker: exit status 14 (200.069ms)

-- stdout --
	* [multinode-994000-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-994000-m02' is duplicated with machine name 'multinode-994000-m02' in profile 'multinode-994000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-994000-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-994000-m03 --driver=docker: (46.0921545s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-994000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-994000: exit status 80 (642.8432ms)

-- stdout --
	* Adding node m03 to cluster multinode-994000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-994000-m03 already exists in multinode-994000-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_6.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-994000-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-994000-m03: (3.7375161s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.82s)

TestPreload (164.56s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-421600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
E1205 07:35:22.659603    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-421600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m38.1482105s)
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-421600 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-421600 image pull gcr.io/k8s-minikube/busybox: (2.1154506s)
preload_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-421600
preload_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-421600: (12.0021655s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-421600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
E1205 07:35:45.959473    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-421600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (48.1899305s)
preload_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-421600 image list
helpers_test.go:175: Cleaning up "test-preload-421600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-421600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-421600: (3.6195486s)
--- PASS: TestPreload (164.56s)
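The sequence above is the essence of the preload check: build a cluster with preloaded tarballs disabled, side-load an extra image, and confirm it survives a stop/start cycle with preload re-enabled. Condensed sketch ("minikube" stands in for the built binary):

    minikube start -p test-preload-421600 --preload=false --driver=docker
    minikube -p test-preload-421600 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-421600
    minikube start -p test-preload-421600 --preload=true --driver=docker
    minikube -p test-preload-421600 image list   # busybox must still be listed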

TestScheduledStopWindows (114.38s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-886800 --memory=3072 --driver=docker
E1205 07:36:45.740990    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-886800 --memory=3072 --driver=docker: (48.0039855s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-886800 --schedule 5m
minikube stop output:

scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-886800 -n scheduled-stop-886800
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-886800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-886800 --schedule 5s
E1205 07:37:23.954595    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-886800 --schedule 5s: (1.0725357s)
minikube stop output:

E1205 07:37:42.889480    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-886800
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-886800: exit status 7 (218.2029ms)

-- stdout --
	scheduled-stop-886800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-886800 -n scheduled-stop-886800
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-886800 -n scheduled-stop-886800: exit status 7 (219.1968ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-886800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-886800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-886800: (2.5133947s)
--- PASS: TestScheduledStopWindows (114.38s)
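Scheduled stop, as exercised above, is implemented by a systemd unit (minikube-scheduled-stop) inside the node: arming a schedule returns immediately, re-arming replaces the pending stop, and once it fires, status exits 7 with everything reported Stopped. Condensed sketch:

    minikube stop -p scheduled-stop-886800 --schedule 5m    # arm a stop five minutes out
    minikube ssh -p scheduled-stop-886800 -- sudo systemctl show minikube-scheduled-stop --no-page
    minikube stop -p scheduled-stop-886800 --schedule 5s    # re-arm with a shorter delay
    minikube status -p scheduled-stop-886800                # exit status 7 once the stop fires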

TestInsufficientStorage (28.55s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-884000 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-884000 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (24.6940271s)

-- stdout --
	{"specversion":"1.0","id":"65a7584f-669a-4474-946c-c4456897934e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-884000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9347875-f26a-46ab-b9dc-427aa3c33d21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"289b6290-e020-4471-b483-8b143fe52ef6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2cf345c9-ef30-4a3c-996a-20f8e1d90b83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"cdeca97b-8ba4-4fc9-a384-026596e78401","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"eea69d29-72ed-4caf-a7c3-bcfef7f414c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f3694336-71be-4d05-9c76-6ab52fec4690","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"caed7d3c-ede8-492c-a2f3-8dab138fbe45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7437d440-cf27-40d3-9cda-bf289cb07334","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"46fdf9f3-d7be-4d87-b718-1e118dba1309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"b9c98bd0-96f7-4505-8a1b-ef799e6b39f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-884000\" primary control-plane node in \"insufficient-storage-884000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5db24e4-bbd8-450a-b45e-5c643d7494ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d42f71c5-a4c7-4e55-aef8-b4a605459312","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7a17df3-feb0-4b13-a014-da6844affa8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-884000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-884000 --output=json --layout=cluster: exit status 7 (580.1723ms)
-- stdout --
	{"Name":"insufficient-storage-884000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-884000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1205 07:38:53.144111    9852 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-884000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-884000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-884000 --output=json --layout=cluster: exit status 7 (590.9376ms)
-- stdout --
	{"Name":"insufficient-storage-884000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-884000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1205 07:38:53.733944    3004 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-884000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1205 07:38:53.756480    3004 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-884000\events.json: The system cannot find the file specified.
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-884000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-884000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-884000: (2.68152s)
--- PASS: TestInsufficientStorage (28.55s)
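
The --output=json run above emits one CloudEvents-style JSON object per line; the RSRC_DOCKER_STORAGE failure arrives as a type io.k8s.sigs.minikube.error event carrying the advice text and exit code 26. A minimal Go sketch that consumes such a stream, assuming only the field names visible in this output:

// parse_events.go: read minikube's --output=json lines from stdin and surface
// step and error events, e.g. minikube start --output=json | parse_events
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
		case "io.k8s.sigs.minikube.error":
			// e.g. name=RSRC_DOCKER_STORAGE, exitcode=26 in the run above
			fmt.Printf("ERROR %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}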

TestRunningBinaryUpgrade (220.31s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.4014919039.exe start -p running-upgrade-304500 --memory=3072 --vm-driver=docker
E1205 07:42:42.892860    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.4014919039.exe start -p running-upgrade-304500 --memory=3072 --vm-driver=docker: (57.7539274s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-304500 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-304500 --memory=3072 --alsologtostderr -v=1 --driver=docker: (2m32.5994069s)
helpers_test.go:175: Cleaning up "running-upgrade-304500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-304500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-304500: (9.0115534s)
--- PASS: TestRunningBinaryUpgrade (220.31s)

TestMissingContainerUpgrade (126.74s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.355907731.exe start -p missing-upgrade-943600 --memory=3072 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.355907731.exe start -p missing-upgrade-943600 --memory=3072 --driver=docker: (49.3985746s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-943600
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-943600: (1.4976774s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-943600
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-943600 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-943600 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m10.5821277s)
helpers_test.go:175: Cleaning up "missing-upgrade-943600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-943600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-943600: (4.1580584s)
--- PASS: TestMissingContainerUpgrade (126.74s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (286.9688ms)
-- stdout --
	* [NoKubernetes-852300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)
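
The MK_USAGE failure above (exit status 14) is plain flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and the suggested fix is to clear the global config with "minikube config unset kubernetes-version". An illustrative Go sketch of that kind of mutual-exclusion check (not minikube's actual code; the messages and exit code are copied from the run above):

// flag_guard.go: reject a conflicting flag combination the way the run above does.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes,")
		fmt.Fprintln(os.Stderr, "to unset a global config run:")
		fmt.Fprintln(os.Stderr, "\n$ minikube config unset kubernetes-version")
		os.Exit(14) // MK_USAGE, matching the exit status seen above
	}
	fmt.Println("flags OK")
}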

TestStoppedBinaryUpgrade/Setup (0.81s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

TestPause/serial/Start (127.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-852300 --memory=3072 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-852300 --memory=3072 --install-addons=false --wait=all --driver=docker: (2m7.4926522s)
--- PASS: TestPause/serial/Start (127.49s)

TestNoKubernetes/serial/StartWithK8s (93.04s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m32.3513323s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-852300 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.04s)

TestStoppedBinaryUpgrade/Upgrade (410.39s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3586917875.exe start -p stopped-upgrade-852300 --memory=3072 --vm-driver=docker
E1205 07:40:22.664066    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:40:27.038634    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3586917875.exe start -p stopped-upgrade-852300 --memory=3072 --vm-driver=docker: (1m56.5269041s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3586917875.exe -p stopped-upgrade-852300 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3586917875.exe -p stopped-upgrade-852300 stop: (12.7453379s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-852300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-852300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m41.1183578s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (410.39s)

TestNoKubernetes/serial/StartWithStopK8s (21.21s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (17.7734931s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-852300 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-852300 status -o json: exit status 2 (621.2111ms)
-- stdout --
	{"Name":"NoKubernetes-852300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-852300
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-852300: (2.8107937s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.21s)

TestNoKubernetes/serial/Start (15.86s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (15.8606135s)
--- PASS: TestNoKubernetes/serial/Start (15.86s)

TestPause/serial/SecondStartNoReconfiguration (60.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-852300 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-852300 --alsologtostderr -v=1 --driver=docker: (1m0.0340051s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (60.05s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.74s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-852300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-852300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (739.2271ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.74s)
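
The check above asks systemd inside the node whether the kubelet unit is active: systemctl is-active exits 0 for an active unit and non-zero (3 for inactive) otherwise, and minikube ssh propagates that as its own non-zero exit, hence the expected "exit status 1" wrapping "Process exited with status 3" in the stderr. A minimal Go sketch of the same probe, with "minikube" on PATH as an assumption:

// kubelet_check.go: confirm kubelet is NOT running in a --no-kubernetes node,
// so a non-zero exit from systemctl is the expected outcome here.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-852300",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running (expected for --no-kubernetes):", err)
		return
	}
	fmt.Println("kubelet is active")
}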

TestNoKubernetes/serial/ProfileList (3.53s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.9024225s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.6275041s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.53s)

TestNoKubernetes/serial/Stop (6.7s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-852300
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-852300: (6.6950791s)
--- PASS: TestNoKubernetes/serial/Stop (6.70s)

TestNoKubernetes/serial/StartNoArgs (12.04s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --driver=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-852300 --driver=docker: (12.0409718s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (12.04s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.57s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-852300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-852300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (569.551ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.57s)

TestPause/serial/Pause (1.04s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-852300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-852300 --alsologtostderr -v=5: (1.0422855s)
--- PASS: TestPause/serial/Pause (1.04s)

TestPause/serial/VerifyStatus (0.62s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-852300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-852300 --output=json --layout=cluster: exit status 2 (624.2347ms)
-- stdout --
	{"Name":"pause-852300","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-852300","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.62s)
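
The --layout=cluster output above uses HTTP-style status codes per component: 200 OK, 405 Stopped, 418 Paused, 500 Error, and (in TestInsufficientStorage earlier) 507 InsufficientStorage. A minimal Go sketch that decodes this shape, with the struct inferred from this report's output rather than from a documented schema:

// cluster_state.go: decode minikube's --layout=cluster status JSON, e.g.
// minikube status -p pause-852300 --output=json --layout=cluster | cluster_state
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterState struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []clusterState       `json:"Nodes"` // nodes reuse the same shape
}

func main() {
	var st clusterState
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}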

TestPause/serial/Unpause (0.88s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-852300 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

TestPause/serial/PauseAgain (1.25s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-852300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-852300 --alsologtostderr -v=5: (1.2511726s)
--- PASS: TestPause/serial/PauseAgain (1.25s)

TestPause/serial/DeletePaused (3.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-852300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-852300 --alsologtostderr -v=5: (3.7510261s)
--- PASS: TestPause/serial/DeletePaused (3.75s)

TestPause/serial/VerifyDeletedResources (20.48s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (20.2889522s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-852300
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-852300: exit status 1 (55.002ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-852300: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (20.48s)
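
Deletion is verified above by expecting failure: once the profile is gone, "docker volume inspect pause-852300" exits non-zero with "no such volume", so the error path is the success path. A minimal Go sketch of that inverted check:

// volume_gone.go: after minikube delete, the profile's Docker volume should
// no longer exist, so a failing inspect is what we want to see.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-852300").CombinedOutput()
	if err != nil {
		fmt.Println("volume is gone, as expected:", string(out))
		return
	}
	fmt.Println("volume still exists:", string(out))
}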

TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-852300
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-852300: (1.5761832s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

TestStartStop/group/old-k8s-version/serial/FirstStart (95.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-648900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-648900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m35.4172755s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (95.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-648900 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cfba4fd3-cad3-493b-bdb1-b901e297f605] Pending
helpers_test.go:352: "busybox" [cfba4fd3-cad3-493b-bdb1-b901e297f605] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cfba4fd3-cad3-493b-bdb1-b901e297f605] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0063362s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-648900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)
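
The DeployApp step above creates the busybox pod and then polls pods matching integration-test=busybox until one is Running (healthy within 9s here, with an 8m0s budget). A minimal client-go sketch of that wait, assuming KUBECONFIG points at the profile's kubeconfig (the harness instead selects --context old-k8s-version-648900):

// wait_busybox.go: poll the default namespace for a Running pod with the
// label the test uses; this is an illustration, not the harness's own code.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(8 * time.Minute) // matches the 8m0s wait above
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("%s is Running\n", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for busybox")
	os.Exit(1)
}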

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-648900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-648900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3828544s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-648900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.57s)

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-648900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-648900 --alsologtostderr -v=3: (12.0858618s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-648900 -n old-k8s-version-648900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-648900 -n old-k8s-version-648900: exit status 7 (207.9313ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-648900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.54s)

TestStartStop/group/old-k8s-version/serial/SecondStart (33.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-648900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-648900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (32.9112771s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-648900 -n old-k8s-version-648900
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (33.57s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gs66x" [aa48b4c4-6a8b-464f-92db-77acdb81180c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gs66x" [aa48b4c4-6a8b-464f-92db-77acdb81180c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.0056664s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

TestStartStop/group/embed-certs/serial/FirstStart (94.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-237800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-237800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (1m34.6979007s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.70s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gs66x" [aa48b4c4-6a8b-464f-92db-77acdb81180c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.1694909s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-648900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.99s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-648900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/old-k8s-version/serial/Pause (5.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-648900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-648900 --alsologtostderr -v=1: (1.5169972s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-648900 -n old-k8s-version-648900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-648900 -n old-k8s-version-648900: exit status 2 (674.034ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-648900 -n old-k8s-version-648900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-648900 -n old-k8s-version-648900: exit status 2 (787.9318ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-648900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-648900 --alsologtostderr -v=1: (1.0955866s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-648900 -n old-k8s-version-648900
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-648900 -n old-k8s-version-648900: (1.1416055s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-648900 -n old-k8s-version-648900
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.87s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-944500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
E1205 07:50:22.673361    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-944500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (1m21.4281088s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.43s)

TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-237800 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3346bf11-1acc-40cc-91a0-2563dc769024] Pending
helpers_test.go:352: "busybox" [3346bf11-1acc-40cc-91a0-2563dc769024] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3346bf11-1acc-40cc-91a0-2563dc769024] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0060939s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-237800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-237800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-237800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3376376s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-237800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/embed-certs/serial/Stop (12.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-237800 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-237800 --alsologtostderr -v=3: (12.4889772s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.49s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-944500 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b937fde0-73db-483f-a93e-c7ca209a6a63] Pending
helpers_test.go:352: "busybox" [b937fde0-73db-483f-a93e-c7ca209a6a63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b937fde0-73db-483f-a93e-c7ca209a6a63] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0067138s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-944500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-944500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-944500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.4254431s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-944500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.63s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-944500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-944500 --alsologtostderr -v=3: (12.3610469s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-237800 -n embed-certs-237800
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-237800 -n embed-certs-237800: exit status 7 (225.3381ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-237800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)

TestStartStop/group/embed-certs/serial/SecondStart (49.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-237800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-237800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (48.8232825s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-237800 -n embed-certs-237800
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500: exit status 7 (208.992ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-944500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.54s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-944500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
E1205 07:52:23.968471    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:52:25.977247    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-944500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (51.9860179s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.71s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gn4md" [15b36597-2c4c-42a1-be08-37c437f248b0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0149564s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.33s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gn4md" [15b36597-2c4c-42a1-be08-37c437f248b0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0082278s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-237800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.33s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-237800 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/embed-certs/serial/Pause (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-237800 --alsologtostderr -v=1
E1205 07:52:42.902972    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-237800 --alsologtostderr -v=1: (1.1584354s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-237800 -n embed-certs-237800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-237800 -n embed-certs-237800: exit status 2 (647.4957ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-237800 -n embed-certs-237800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-237800 -n embed-certs-237800: exit status 2 (633.0533ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-237800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-237800 --alsologtostderr -v=1: (1.0322388s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-237800 -n embed-certs-237800
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-237800 -n embed-certs-237800: (1.0294192s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-237800 -n embed-certs-237800
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.21s)
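
The pause verification above reduces to a short command sequence that can be rerun by hand; a minimal sketch using the exact commands from this run, annotated with shell-style comments (the exit status 2 from the two status calls while paused is expected, per the test's own "may be ok" note):

    # Pause the control plane, then read component state via templated status output.
    out/minikube-windows-amd64.exe pause -p embed-certs-237800 --alsologtostderr -v=1
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-237800 -n embed-certs-237800   # "Paused", exit 2
    out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-237800 -n embed-certs-237800     # "Stopped", exit 2
    # Unpause and re-check; both status calls should now succeed (exit 0).
    out/minikube-windows-amd64.exe unpause -p embed-certs-237800 --alsologtostderr -v=1
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-237800 -n embed-certs-237800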

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k2wpm" [3114aaf4-7a0e-492a-a5a7-6f45cc8ed12e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.310282s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.31s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k2wpm" [3114aaf4-7a0e-492a-a5a7-6f45cc8ed12e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0093855s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-944500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.33s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-944500 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)
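
The image verification above is one command plus a scan of its JSON output for repositories outside the expected Kubernetes image set; in this run the scan flagged the busybox test image. The same invocation can be run directly against the profile:

    # Dump the images loaded in the node as JSON; the test walks this list and
    # reports anything unexpected (here gcr.io/k8s-minikube/busybox:1.28.4-glibc).
    out/minikube-windows-amd64.exe -p default-k8s-diff-port-944500 image list --format=json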

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-944500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-944500 --alsologtostderr -v=1: (1.2869538s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500: exit status 2 (683.7057ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500: exit status 2 (680.6441ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-944500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-944500 --alsologtostderr -v=1: (1.0919427s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500: (1.1848532s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-944500 -n default-k8s-diff-port-944500
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.66s)

TestNetworkPlugins/group/auto/Start (86.23s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
E1205 07:53:25.759397    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-088800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.064628    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.071798    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.083909    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.105307    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.147984    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.229942    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.391513    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:29.713172    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:30.355759    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:31.638301    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:34.200604    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:39.323805    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:53:49.566990    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:54:10.049638    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m26.2254216s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.23s)

TestNetworkPlugins/group/auto/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-218000 "pgrep -a kubelet"
I1205 07:54:35.612845    8036 config.go:182] Loaded profile config "auto-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.58s)

TestNetworkPlugins/group/auto/NetCatPod (15.51s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-218000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vhzht" [00d2bc51-3e69-405d-b48a-37f624ece958] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vhzht" [00d2bc51-3e69-405d-b48a-37f624ece958] Running
E1205 07:54:51.012754    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.0068808s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.51s)
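
The NetCatPod step is identical across every plugin group in this report: force-replace the netcat deployment, then poll until a pod labelled app=netcat is Running and Ready. A hand-run equivalent, with kubectl wait standing in for the harness's own Go polling loop (an assumption about mechanism, not what the test literally executes):

    kubectl --context auto-218000 replace --force -f testdata\netcat-deployment.yaml
    # Block until the deployment's pod reports Ready; the test allows up to 15m.
    kubectl --context auto-218000 wait --for=condition=Ready pod -l app=netcat --timeout=15m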

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
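
DNS, Localhost, and HairPin probe three distinct paths from the same netcat pod: cluster DNS resolution, loopback reachability, and hairpin traffic routed back to the pod through its own service. The three underlying commands, exactly as the tests run them:

    kubectl --context auto-218000 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"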

TestNetworkPlugins/group/kindnet/Start (87.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m27.6560556s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.66s)

TestNetworkPlugins/group/calico/Start (96.71s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m36.7056636s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.71s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-64dx4" [ce833750-d3df-4959-9fef-a30323840dd2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0063435s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
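
ControllerPod only requires the CNI's own daemonset pod to become Ready before the connectivity tests proceed. A manual stand-in for the poll (kubectl wait here is an assumption; the harness uses its own watch loop):

    kubectl --context kindnet-218000 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m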

TestNetworkPlugins/group/kindnet/KubeletFlags (0.68s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-218000 "pgrep -a kubelet"
I1205 07:56:58.322468    8036 config.go:182] Loaded profile config "kindnet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.68s)

TestNetworkPlugins/group/kindnet/NetCatPod (17.68s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-218000 replace --force -f testdata\netcat-deployment.yaml
I1205 07:56:58.972154    8036 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1205 07:56:58.973145    8036 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ttmck" [45dfd8dc-342a-49d2-bdc7-8564e2f95562] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 07:57:07.057042    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ttmck" [45dfd8dc-342a-49d2-bdc7-8564e2f95562] Running
E1205 07:57:10.833857    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.0059505s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.68s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-cdh2m" [d1d36eb6-6b98-4963-a944-bab5d51e7644] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1205 07:57:23.973750    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-cdh2m" [d1d36eb6-6b98-4963-a944-bab5d51e7644] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016191s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-218000 "pgrep -a kubelet"
I1205 07:57:25.624643    8036 config.go:182] Loaded profile config "calico-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

TestNetworkPlugins/group/calico/NetCatPod (15.55s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-218000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jnbdc" [0b3110c5-6da7-4790-ad6a-fbb7fcf04a6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jnbdc" [0b3110c5-6da7-4790-ad6a-fbb7fcf04a6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.0082292s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.55s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (73.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E1205 07:57:51.797317    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m13.1585129s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.16s)

TestStartStop/group/no-preload/serial/Stop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-104100 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-104100 --alsologtostderr -v=3: (5.1143487s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (5.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.59s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100: exit status 7 (239.2613ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-104100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.59s)
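
Stop and EnableAddonAfterStop together confirm that an addon can be enabled against a stopped profile. Condensed into the commands from this run (exit status 7 from status is what the test treats as the stopped-host case, hence its "may be ok" note):

    out/minikube-windows-amd64.exe stop -p no-preload-104100 --alsologtostderr -v=3
    out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-104100 -n no-preload-104100   # "Stopped", exit 7
    out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-104100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4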

TestNetworkPlugins/group/false/Start (75.8s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
E1205 07:58:29.067971    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:58:56.780526    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m15.8031468s)
--- PASS: TestNetworkPlugins/group/false/Start (75.80s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-218000 "pgrep -a kubelet"
I1205 07:59:04.451722    8036 config.go:182] Loaded profile config "custom-flannel-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.58s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-218000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l2xpq" [c6a74208-dbd4-42a0-9b14-27e6de47102f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 07:59:13.721358    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-l2xpq" [c6a74208-dbd4-42a0-9b14-27e6de47102f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.0063295s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.52s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/false/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-218000 "pgrep -a kubelet"
I1205 07:59:35.677829    8036 config.go:182] Loaded profile config "false-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.55s)

TestNetworkPlugins/group/false/NetCatPod (14.51s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-218000 replace --force -f testdata\netcat-deployment.yaml
E1205 07:59:36.108104    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:36.116117    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:36.128100    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:36.150119    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q6k77" [4161db44-3fa7-48fb-972d-30b418ba6f97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 07:59:36.193100    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:36.275704    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:36.437418    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:36.759016    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:37.400902    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:38.682402    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:41.244531    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-q6k77" [4161db44-3fa7-48fb-972d-30b418ba6f97] Running
E1205 07:59:46.366648    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.0180196s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.51s)

TestNetworkPlugins/group/false/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.29s)

TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (95.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E1205 07:59:56.609349    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:00:17.091568    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m35.9881689s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (95.99s)

TestNetworkPlugins/group/flannel/Start (66.97s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E1205 08:00:58.055040    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:29.850563    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-944500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m6.9731351s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.97s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-218000 "pgrep -a kubelet"
I1205 08:01:31.436665    8036 config.go:182] Loaded profile config "enable-default-cni-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-218000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g22zq" [96aefc61-6849-4639-b054-08d5ec30d3ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g22zq" [96aefc61-6849-4639-b054-08d5ec30d3ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.0055209s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.57s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tnmwf" [7fa181bf-b2dc-4c26-b8a5-771c8cbfa670] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0075662s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-218000 "pgrep -a kubelet"
I1205 08:01:40.806797    8036 config.go:182] Loaded profile config "flannel-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.54s)

TestNetworkPlugins/group/flannel/NetCatPod (15.45s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-218000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xcf98" [64255cf9-4d04-4462-937f-e78ed65b7b6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xcf98" [64255cf9-4d04-4462-937f-e78ed65b7b6a] Running
E1205 08:01:51.644547    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:51.651300    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:51.662814    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:51.684882    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:51.727443    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:51.809997    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:51.971642    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:52.293695    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:52.935374    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:54.217313    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.0071224s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.45s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1205 08:01:56.779280    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (95.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1205 08:02:19.978321    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.029079    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.037086    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.050079    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.073094    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.116072    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.199061    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.362056    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:20.684427    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:21.326550    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:22.608959    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:23.978716    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-925500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:25.171652    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m35.4367762s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.44s)
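
The cert_rotation errors interleaved above are a side effect of profile churn rather than a bridge failure: the shared kubeconfig still points at client certificates for profiles (auto-218000, calico-218000, addons-925500) whose .minikube\profiles directories have already been deleted, so client-go logs a reload failure on every rotation attempt. A minimal cleanup sketch, assuming the stale calico-218000 entries are no longer needed (repeat for any other stale profile name):

    kubectl config delete-context calico-218000
    kubectl config delete-cluster calico-218000
    kubectl config unset users.calico-218000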

TestNetworkPlugins/group/kubenet/Start (87.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E1205 08:02:40.535928    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:42.912290    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-247800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:03:01.019017    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:03:13.590006    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:03:29.073776    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-648900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:03:41.981752    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m27.0980877s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (87.10s)

TestStartStop/group/newest-cni/serial/Stop (1.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-042100 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-042100 --alsologtostderr -v=3: (1.9098272s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.91s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-042100 -n newest-cni-042100: exit status 7 (219.0636ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-042100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.54s)
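
Exit status 7 is the non-zero code this suite accepts for a stopped host, which is why the harness notes "may be ok" and proceeds to enable the addon. A hedged PowerShell sketch of the same check, using the profile name from the log above:

    & out\minikube-windows-amd64.exe status --format='{{.Host}}' -p newest-cni-042100
    if ($LASTEXITCODE -eq 7) { 'host is Stopped; addons can still be enabled before restart' }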

TestNetworkPlugins/group/bridge/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-218000 "pgrep -a kubelet"
I1205 08:03:55.430477    8036 config.go:182] Loaded profile config "bridge-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.60s)

TestNetworkPlugins/group/bridge/NetCatPod (15.63s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-218000 replace --force -f testdata\netcat-deployment.yaml
I1205 08:03:56.022296    8036 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2jf4p" [268da1b4-e341-4fae-b443-29fbcdda49fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 08:04:04.956798    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:04.963649    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:04.976216    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:04.998707    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:05.041626    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2jf4p" [268da1b4-e341-4fae-b443-29fbcdda49fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.0073052s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.63s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-218000 "pgrep -a kubelet"
E1205 08:04:05.123144    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:05.285671    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:05.607826    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1205 08:04:05.685470    8036 config.go:182] Loaded profile config "kubenet-218000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.60s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.5s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-218000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qzlnq" [13d5231d-6a4a-48e6-8ce2-b9160d428450] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 08:04:06.249219    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:07.532030    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:10.094061    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qzlnq" [13d5231d-6a4a-48e6-8ce2-b9160d428450] Running
E1205 08:04:15.218095    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.0079285s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.50s)
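
The readiness polling done by helpers_test.go can be approximated outside the harness with kubectl wait; a minimal sketch, assuming the netcat deployment from testdata\netcat-deployment.yaml has already been applied:

    kubectl --context kubenet-218000 wait --for=condition=ready pod -l app=netcat --timeout=15m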

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-218000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/kubenet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.24s)
E1205 08:04:56.647076    8036 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-218000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
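
Taken together, the DNS, Localhost, and HairPin subtests are three probes run from inside the netcat pod: service DNS resolution, a loopback connection, and a hairpin connection (the pod reaching itself through its own service name). They can be replayed by hand with the exact commands the suite uses:

    kubectl --context kubenet-218000 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kubenet-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kubenet-218000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"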

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-042100 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)
Test skip (34/427)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.21
42 TestAddons/serial/GCPAuth/RealCredentials 0
44 TestAddons/parallel/Registry 42.13
46 TestAddons/parallel/Ingress 25.98
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
99 TestFunctional/parallel/DashboardCmd 300.03
103 TestFunctional/parallel/MountCmd 0
106 TestFunctional/parallel/ServiceCmdConnect 12.29
117 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 0.53
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
257 TestGvisorAddon 0
286 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
287 TestISOImage 0
354 TestScheduledStopUnix 0
355 TestSkaffold 0
370 TestStartStop/group/disable-driver-mounts 0.49
401 TestNetworkPlugins/group/cilium 9.85

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1205 06:05:44.175120    8036 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
W1205 06:05:44.275824    8036 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
W1205 06:05:44.381651    8036 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.21s)
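
Both preload URLs returned 404, so the skip is expected for a beta Kubernetes version with no published tarball yet. The availability check is a plain HTTP probe and can be reproduced with a HEAD request; a sketch using the first URL from the log:

    curl.exe -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 | Select-Object -First 1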

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (42.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.6905ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-rd4sn" [ff668873-f5a4-46ff-ae2a-b8025688a8c4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0048753s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-t5tsk" [2e92e592-42c7-4aaa-bdf8-b04b593d2af4] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0056008s
addons_test.go:392: (dbg) Run:  kubectl --context addons-925500 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-925500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-925500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (28.7206408s)
addons_test.go:407: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable registry --alsologtostderr -v=1: (1.2071431s)
--- SKIP: TestAddons/parallel/Registry (42.13s)

TestAddons/parallel/Ingress (25.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-925500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-925500 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-925500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [804a7dd4-61eb-4248-89e8-d773365918f5] Pending
helpers_test.go:352: "nginx" [804a7dd4-61eb-4248-89e8-d773365918f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [804a7dd4-61eb-4248-89e8-d773365918f5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0066412s
I1205 06:14:32.843763    8036 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable ingress-dns --alsologtostderr -v=1: (1.6253047s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-925500 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-925500 addons disable ingress --alsologtostderr -v=1: (8.252046s)
--- SKIP: TestAddons/parallel/Ingress (25.98s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-088800 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-088800 --alsologtostderr -v=1] ...
helpers_test.go:519: unable to terminate pid 7068: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)
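
The "Access is denied" teardown failure is a Windows permission issue rather than a dashboard one: the helper cannot signal a child process it does not own. A hedged sketch of a forced kill, which would typically require an elevated shell (pid taken from the log above):

    Stop-Process -Id 7068 -Force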

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (12.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-088800 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-088800 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-c5xzq" [b6b9270f-947d-45fb-82e5-9e7f13b5991d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-c5xzq" [b6b9270f-947d-45fb-82e5-9e7f13b5991d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.0071188s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (12.29s)
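
The skip is by design on port-forwarded drivers such as Docker on Windows, where a NodePort is not directly reachable from the host (issue #7383). Outside the harness the usual workaround is the minikube service tunnel; a sketch for the deployment created above:

    out/minikube-windows-amd64.exe -p functional-088800 service hello-node-connect --url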

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-247800 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-247800 --alsologtostderr -v=1] ...
helpers_test.go:519: unable to terminate pid 11164: Access is denied.
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.49s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-451500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-451500
--- SKIP: TestStartStop/group/disable-driver-mounts (0.49s)

TestNetworkPlugins/group/cilium (9.85s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-218000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-218000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-218000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-218000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: kubelet daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> k8s: kubelet logs:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:43:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:60025
  name: kubernetes-upgrade-863300
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:41:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:59776
  name: stopped-upgrade-852300
contexts:
- context:
    cluster: kubernetes-upgrade-863300
    user: kubernetes-upgrade-863300
  name: kubernetes-upgrade-863300
- context:
    cluster: stopped-upgrade-852300
    user: stopped-upgrade-852300
  name: stopped-upgrade-852300
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-863300
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300/client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-863300/client.key
- name: stopped-upgrade-852300
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\stopped-upgrade-852300/client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\stopped-upgrade-852300/client.key
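
Note: the kubeconfig above defines only the kubernetes-upgrade-863300 and stopped-upgrade-852300 contexts, which is why every cilium-218000 probe in this debug dump reports a missing context or profile. As a minimal sketch of how to confirm this directly, the standard commands below would list the defined contexts and host profiles; running them against this exact CI workspace and kubeconfig is an assumption, not something the log shows:

  # Show the contexts the active kubeconfig defines
  kubectl config get-contexts

  # Show the minikube profiles present on the host
  out/minikube-windows-amd64.exe profile list
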
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-218000

>>> host: docker daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: docker daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: docker system info:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: cri-docker daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: cri-docker daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: cri-dockerd version:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: containerd daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: containerd daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: containerd config dump:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: crio daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: crio daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/crio:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: crio config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

----------------------- debugLogs end: cilium-218000 [took: 9.3755094s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-218000
--- SKIP: TestNetworkPlugins/group/cilium (9.85s)